DMF Service Node Appliance
This chapter describes configuring the managed services provided by the DANZ Monitoring Fabric (DMF) Service Node Appliance.
Overview
The DANZ Monitoring Fabric (DMF) Service Node has multiple interfaces connected to traffic for processing and analysis. Each interface can be programmed independently to provide any supported managed-service actions.
To create a managed service, identify a switch interface connected to the service node, specify the service action, and configure the service action options.
Configure a DMF policy to use the managed service by name. This action causes the Controller to forward traffic the policy selects to the service node. The processed traffic is returned to the monitoring fabric using the same interface and sent to the tools (delivery interfaces) defined in the DMF policy.
If the traffic volume the policy selects is too much for a single service node interface, define a LAG on the switch connected to the service node, then use the LAG interface when defining the managed service. All service node interfaces connected to the LAG are configured to perform the same action. The traffic the policy selects is automatically load-balanced among the LAG member interfaces, and the return traffic is distributed the same way.
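For example, a managed service can reference a LAG interface in the same way as a physical interface. The following is a minimal sketch; the LAG lag1 is assumed to already exist on CORE-SWITCH-1, and the service name is illustrative:
! managed-service
managed-service MS-DEDUP-LAG-1
1 dedup full-packet window 2
service-interface switch CORE-SWITCH-1 lag1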
Changing the Service Node Default Configuration
To change the default configuration for a service node, enter the config-service-node submode by entering the following command from config mode on the active DMF Controller:
controller-1(config)# service-node service_node_alias
controller-1(config-service-node)#
Replace service_node_alias with the alias to use for the service node. This alias is associated with the hardware MAC address of the service node using the mac command. The hardware MAC address configuration is mandatory for the service node to interact with the DMF Controller.
Use the following commands from config-service-node submode to override the default configuration for the associated service node (see the sketch after this list):
- admin password: set the password to log in to the service node as an admin user.
- banner: set the service node pre-login banner message.
- description: set a brief description.
- logging: enable service node logging to the Controller.
- mac: configure a MAC address for the service node.
- ntp: configure NTP parameters for the service node, overriding the default settings.
- snmp-server: configure an SNMP trap host to receive SNMP traps from the service node.
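The following is a minimal sketch of this submode; the alias, MAC address, and text values are illustrative assumptions:
controller-1(config)# service-node DMF-SN-1
controller-1(config-service-node)# mac 00:11:22:33:44:55
controller-1(config-service-node)# description 'Service node in rack 12'
controller-1(config-service-node)# banner 'Authorized access only'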
Using SNMP to Monitor DPDK Service Node Interfaces
The following MIBs can be used to monitor DPDK service node interfaces:
- interfaces MIB: .1.3.6.1.2.1.2
- ifMIBObjects MIB: .1.3.6.1.2.1.31.1
For example, the following commands retrieve the 64-bit octet counters for the interface with ifIndex 105:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.6.105
IF-MIB::ifHCInOctets.105 = Counter64: 10008
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.10.105
IF-MIB::ifHCOutOctets.105 = Counter64: 42721
The following commands retrieve the operational status of two interfaces:
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.109
IF-MIB::ifOperStatus.109 = INTEGER: down(2)
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.105
IF-MIB::ifOperStatus.105 = INTEGER: up(1)
Configuring Managed Services

This page displays the service node appliance devices connected to the DMF Controller and the services configured on the Controller.
Using the GUI to Define a Managed Service
To create a new managed service, perform the following steps:
Using the CLI to Define a Managed Service
To configure a service to direct traffic to a DMF service node, complete the following steps:
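The general CLI pattern, used throughout this chapter, is to create the managed service, bind it to a service interface, and configure a numbered action. A minimal sketch with illustrative names:
controller-1(config)# managed-service MS-EXAMPLE-1
controller-1(config-managed-srv)# service-interface switch CORE-SWITCH-1 ethernet13/1
controller-1(config-managed-srv)# 1 dedup full-packet window 2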
Monitoring Managed Services
Use the following commands to display the interfaces and statistics for a service node, replacing SN-Name with the service node alias:
controller-1# show managed-service-device SN-Name interfaces
controller-1# show managed-service-device SN-Name stats
Deduplication Action
The DANZ Monitoring Fabric (DMF) Service Node enhances the efficiency of network monitoring tools by eliminating duplicate packets. Duplicate packets can be introduced into the out-of-band monitoring data stream by receiving the same flow from multiple TAP or SPAN ports spread across the production network. Deduplication eliminates these duplicate packets and enables more efficient use of passive monitoring tools.
- Full packet deduplication: deduplicates incoming packets that are identical at the L2/L3/L4 layers.
- Routed packet deduplication: as packets traverse an IP network, the MAC address changes from hop to hop. Routed packet deduplication enables users to match packet contents starting from the L3 header.
- NATed packet deduplication: to perform NATed deduplication, the service node compares packets in the configured window that are identical starting from the L4 payload. To use NATed packet deduplication, configure the following fields as required:
- Anchor: Packet Start, L2 Header Start, L3 Header Start, or L3 Payload Start.
- Offset: the number of bytes from the anchor where the deduplication check begins.
The time window in which the service looks for duplicate packets is configurable. Select one of the following values: 2 ms (the default), 4 ms, 6 ms, or 8 ms.
GUI Configuration

CLI Configuration
Controller-1(config)# show running-config managed-service MS-DEDUP-FULL-PACKET
! managed-service
managed-service MS-DEDUP-FULL-PACKET
description 'This is a service that does Full Packet Deduplication'
1 dedup full-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/1
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-ROUTED-PACKET
! managed-service
managed-service MS-DEDUP-ROUTED-PACKET
description 'This is a service that does Routed Packet Deduplication'
1 dedup routed-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/2
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-NATTED-PACKET
! managed-service
managed-service MS-DEDUP-NATTED-PACKET
description 'This is a service that does Natted Packet Deduplication'
1 dedup anchor-offset l4-payload-start 0 window 8
service-interface switch CORE-SWITCH-1 ethernet13/3
Controller-1(config)#
Use the show managed-service-device Service-Node-Name stats dedup-service-name dedup command to display deduplication statistics, as in the following example.
Controller-1(config)# show managed-service-device DMF-SN-R740-1 stats MS-DEDUP dedup
~~~~~~~~~~~~~~~~ Stats ~~~~~~~~~~~~~~~~
Interface Name : sni16
Function : dedup
Service Name : MS-DEDUP
Rx packets : 9924950
Rx bytes : 4216466684
Rx Bit Rate : 1.40Gbps
Applied packets : 9923032
Applied bytes : 4216337540
Applied Bit Rate : 1.40Gbps
Tx packets : 9796381
Tx bytes : 4207106113
Tx Bit Rate : 1.39Gbps
Deduped frame count : 126651
Deduped percent : 1.2763336851075358
Load : low
Controller-1(config)#
Header Strip Action
This action removes specific headers from the traffic selected by the associated DANZ Monitoring Fabric (DMF) policy. The following decap actions are supported:
- decap-erspan: remove the Encapsulated Remote Switch Port Analyzer (ERSPAN) header.
- decap-cisco-fabric-path: remove the Cisco FabricPath protocol header.
- decap-l3-mpls: remove the Layer-3 Multi-protocol Label Switching (MPLS) header.
- decap-lisp: remove the Locator/ID Separation Protocol (LISP) header.
- decap-vxlan [udp-port vxlan port]: remove the Virtual Extensible LAN (VXLAN) header.
- decap-geneve: remove the Geneve header.
Alternatively, define custom header stripping based on the starting position of the Layer-3 header, the Layer-4 header, the Layer-4 payload, or the first byte in the packet, using one of the following anchors:
- l3-header-start
- l4-header-start
- l4-payload-start
- packet-start
Enter a positive integer representing the offset in bytes from the anchor at which the strip action begins. If the offset is omitted, header stripping starts from the first byte in the packet.
GUI Configuration

After assigning the required actions to the header stripping service, select Next or Post-Service Match.

CLI Configuration
The header-strip service action strips the header and replaces it in one of the following ways:
- Add the original L2 src-mac, and dst-mac.
- Add the original L2 src-mac, dst-mac, and ether-type.
- Specify and add a custom src-mac, dst-mac, and ether-type.
The following are examples of custom header stripping:
! managed-service
managed-service MS-HEADER-STRIP-1
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac
service-interface switch CORE-SWITCH-1 ethernet13/1
! managed-service
managed-service MS-HEADER-STRIP-2
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac-ethertype
service-interface switch CORE-SWITCH-1 ethernet13/2
! managed-service
managed-service MS-HEADER-STRIP-3
1 header-strip packet-start 20 add-custom-l2-header 00:11:01:02:03:04 00:12:01:02:03:04 0x800
service-interface switch CORE-SWITCH-1 ethernet13/3
Configuring the Post-service Match
Configuring a post-service match for a header-strip managed service provides the following benefits:
- The fabric can remain in L3/L4 mode. It is not necessary to change to offset match mode.
- Easier configuration.
- All match conditions are available for the inner packet.
- The policy requires only one managed service to perform the strip service action.
! managed-service
managed-service MS-HEADER-STRIP-4
service-interface switch CORE-SWITCH-1 ethernet1
1 decap-l3-mpls
!
post-service-match
1 match ip src-ip 1.1.1.1
2 match tcp dst-ip 2.2.2.0 255.255.255.0
! policy
policy POLICY-1
filter-interface TAP-1
delivery-interface TOOL-1
use-managed-service MS-HEADER-STRIP-4 sequence 1
IPFIX and NetFlow Actions
IP Flow Information Export (IPFIX), also known as NetFlow v10, is an IETF standard defined in RFC 7011. The IPFIX generator (agent) gathers and transmits information about flows, which are sets of packets that contain all the keys specified by the IPFIX template. The generator observes the packets received in each flow and forwards the information to the IPFIX collector (server) in the form of a flowset.
Starting with the DANZ Monitoring Fabric (DMF) 7.1.0 release, NetFlow v9 (Cisco proprietary) and IPFIX/NetFlow v10 are both supported. Configuration of the IPFIX managed service is similar to configuration for earlier versions of NetFlow except for the UDP port definition. NetFlow v5 collectors typically listen on UDP port 2055, while IPFIX collectors listen on UDP port 4739.
NetFlow records are typically exported using User Datagram Protocol (UDP) and collected using a flow collector. For a NetFlow service, the service node takes incoming traffic and generates NetFlow records. The service node drops the original packets, and the generated flow records, containing metadata about each flow, are forwarded out of the service node interface.
IPFIX Template
The IPFIX template consists of the key element IDs representing IP flow, field element IDs representing actions the exporter has to perform over IP flows matching key element IDs, the template ID number for uniqueness, collector information, and eviction timers.
To define a template, configure keys of interest representing the IP flow, fields that identify the values measured by the exporter, the exporter information, and the eviction timers. To define the template, use the ipfix-template template-name command in config mode, replacing template-name with a unique identifier for the template instance.
IPFIX Keys
- destination-ipv4-address
- destination-ipv6-address
- destination-mac-address
- destination-transport-port
- dot1q-priority
- dot1q-vlan-id
- ethernet-type
- icmp-type-code-ipv4
- icmp-type-code-ipv6
- ip-class-of-service
- ip-diff-serv-code-point
- ip-protocol-identifier
- ip-ttl
- ip-version
- policy-vlan-id
- records-per-dmf-interface
- source-ipv4-address
- source-ipv6-address
- source-mac-address
- source-transport-port
- vlan id
- The Controller will not allow the key combination of source-mac-address and records-per-dmf-interface in push-per-policy mode.
- The Controller will not allow the key combination of policy-vlan-id and records-per-dmf-interface in push-per-filter mode.
IPFIX Fields
A field defines each value updated for the packets the generator receives that match the specified keys. For example, include fields in the template to record the number of packets, the largest and smallest packet sizes, or the start and end times of the flows. To see a listing of the fields supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, type help in config-ipfix-template submode. The following fields are supported:
- flow-end-milliseconds
- flow-end-reason
- flow-end-seconds
- flow-start-milliseconds
- flow-start-seconds
- maximum-ip-total-length
- maximum-layer2-total-length
- maximum-ttl
- minimum-ip-total-length
- minimum-layer2-total-length
- minimum-ttl
- octet-delta-count
- packet-delta-count
- tcp-control-bits
Active and Inactive Timers
After the number of minutes specified by the active timer, the flowset is closed and forwarded to the IPFIX collector. The default active timer is one minute. If no packets matching the flow definition are received within the number of seconds set by the inactive timer, the flowset is closed and forwarded without waiting for the active timer to expire. The default value for the inactive timer is 15 seconds.
Example Flowset

The following is a running-config that shows the IPFIX template used to generate this flowset.
Example IPFIX Template
! ipfix-template
ipfix-template Perf-temp
template-id 22222
key destination-ipv4-address
key destination-transport-port
key dot1q-vlan-id
key source-ipv4-address
key source-transport-port
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field maximum-ttl
field minimum-ttl
field packet-delta-count
Using the GUI to Define an IPFIX Template
Using the CLI to Define an IPFIX Template
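A minimal sketch of defining a template from the CLI, using keys and fields from the lists above; the template name and ID are illustrative:
controller(config)# ipfix-template IPFIX-EXAMPLE
controller(config-ipfix-template)# template-id 1974
controller(config-ipfix-template)# key source-ipv4-address
controller(config-ipfix-template)# key destination-ipv4-address
controller(config-ipfix-template)# field packet-delta-count
controller(config-ipfix-template)# field flow-end-reason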
Using the GUI to Define an IPFIX Service Action
Select IPFIX from the Action selection list on the page.

- Assign a delivery interface.
- Configure the collector IP address.
- Identify the IPFIX template.
- Inactive timeout: the interval of inactivity that marks a flow inactive.
- Active timeout: the interval between successive IPFIX flowsets exported for a long-running flow.
- Source IP: source address to use for the IPFIX flowsets.
- UDP port: UDP port to use for sending IPFIX flowsets.
- MTU: MTU to use for sending IPFIX flowsets.
After completing the configuration, select Next, and then select Save.
Using the CLI to Define an IPFIX Service Action
controller(config)# managed-service MS-IPFIX-SERVICE
controller(config-managed-srv)# 1 ipfix TO-DELIVERY-INTERFACE
controller(config-managed-srv-ipfix)# collector 10.106.1.60
controller(config-managed-srv-ipfix)# template IPFIX-TEMPLATE
The active-timeout and inactive-timeout commands are optional.
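For example, assuming an active timer of five minutes and an inactive timer of 30 seconds (the values are illustrative):
controller(config-managed-srv-ipfix)# active-timeout 5
controller(config-managed-srv-ipfix)# inactive-timeout 30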
controller1# show running-config managed-service MS-IPFIX-ACTIVE
! managed-service
managed-service MS-IPFIX-ACTIVE
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 ipfix TO-DELIVERY-INTERFACE
collector 10.106.1.60
template IPFIX-TEMPLATE
config# show running-config ipfix-template
! ipfix-template
ipfix-template IPFIX-IP
template-id 1974
key destination-ipv4-address
key destination-ipv6-address
key ethernet-type
key source-ipv4-address
key source-ipv6-address
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field minimum-ttl
field tcp-control-bits
------------------------output truncated------------------------
Records Per Interface NetFlow Using DST-MAC Rewrite
Destination MAC rewrite for the records-per-interface NetFlow and IPFIX feature is the default setting; it applies to switches running Extensible Operating System (EOS) and Switch Light (SWL) and is supported on all platforms. A configuration option exists for using the src-mac when overwriting the dst-mac isn't preferred.
Configurations using the CLI
Global Configuration
Configure the MAC rewrite behavior globally when using records-per-interface. The following example illustrates using rewrite-src-mac or rewrite-dst-mac in conjunction with the filter-mac-rewrite command.
c1(config)# filter-mac-rewrite rewrite-src-mac
c1(config)# filter-mac-rewrite rewrite-dst-mac
NetFlow Configuration
c1(config)# managed-service ms1
c1(config-managed-srv)# 1 netflow
c1(config-managed-srv-netflow)# collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface
IPFIX Configuration
c1(config)# ipfix-template i1
c1(config-ipfix-template)# field maximum-ttl
c1(config-ipfix-template)# key records-per-dmf-interface
c1(config-ipfix-template)# template-id 300
c1(config)# managed-service ms1
c1(config-managed-srv)# 1 ipfix
c1(config-managed-srv-ipfix)# template i1
Show Commands
NetFlow Show Commands
Use the show running-config managed-service command to view the NetFlow settings.
c1(config)# show running-config managed-service
! managed-service
managed-service ms1
!
1 netflow
collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface
IPFIX Show Commands
Use the show ipfix-template i1 command to view the IPFIX settings.
c1(config)# show ipfix-template i1
~~~~~~~~~~~~~~~~~~ Ipfix-templates ~~~~~~~~~~~~~~~~~~
# Template Name Keys                      Fields
-|-------------|-------------------------|-----------|
1 i1            records-per-dmf-interface maximum-ttl
c1(config)# show running-config managed-service
! managed-service
managed-service ms1
!
1 ipfix
template i1
Limitations
- The filter-mac-rewrite rewrite-src-mac command cannot be used on the filter interface that is part of a policy using timestamping replace-src-mac. However, the command works when using a timestamping add-header-after-l2 configuration.
Packet-masking Action
The packet-masking action can hide specific characters in a packet, such as a password or credit card number, based on offsets from different anchors and by matching characters using regular expressions (regex).
The mask service action applies the specified mask to the matched packet region.
GUI Configuration

CLI Configuration
Controller-1(config)# show running-config managed-service MS-PACKET-MASK
! managed-service
managed-service MS-PACKET-MASK
description "This service masks pattern matching an email address in payload with X"
1 mask ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1
Arista Analytics Node Capability
Arista Analytics Node capabilities are enhanced to handle NetFlow v5/v9 and IPFIX packets. All of this flow data is represented with the NetFlow index.

NetFlow records are exported using User Datagram Protocol (UDP) to one or more specified NetFlow collectors. Use the DMF Service Node to configure the NetFlow collector IP address and the destination UDP port. The default UDP port is 2055.
Configuring the Arista Analytics Node Using the GUI
From the Arista Analytics Node dashboard, apply filter rules to display specific flow information.
- Delivery interface: interface to use for delivering NetFlow records to collectors.
Note: The next-hop address must be resolved for the service to be active.
- Collector IP: identify the NetFlow collector IP address.
- Inactive timeout: use the inactive-timeout command to configure the interval of inactivity before NetFlow times out. The default is 15 seconds.
- Source IP: specify a source IP address to use as the source of the NetFlow packets.
- Active timeout: use active timeout to configure the period during which a NetFlow record can be generated continuously before it is automatically terminated. The default is one minute.
- UDP port: change the UDP port number used for the NetFlow packets. The default is 2055.
- Flows: specify the maximum number of flows allowed. The allowed range is 32768 to 1048576. The default is 262144.
- Per-interface records: identify the filter interface where the NetFlow packets were originally received. This information can be used to identify the hop-by-hop path from the filter interface to the NetFlow collector.
- MTU: change the Maximum Transmission Unit (MTU) used for NetFlow packets.

Configuring the Arista Analytics Node Using the CLI
Enter the 1 netflow command and identify the configuration name; the submode changes to config-managed-srv-netflow for viewing and configuring a specific NetFlow configuration.
The DMF Service Node replicates NetFlow packets received without changing the source IP address. Packets that do not match the specified destination IP address and packets that are not IPv4 or UDP are passed through. To configure a NetFlow-managed service, perform the following steps:
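A minimal sketch of such a NetFlow managed service, using the collector syntax shown elsewhere in this chapter; the service name, interfaces, and collector address are illustrative:
controller-1(config)# managed-service MS-NETFLOW-1
controller-1(config-managed-srv)# service-interface switch CORE-SWITCH-1 ethernet13/1
controller-1(config-managed-srv)# 1 netflow
controller-1(config-managed-srv-netflow)# collector 10.106.1.60 udp-port 2055 mtu 1500
controller-1(config-managed-srv-netflow)# l3-delivery-interface TO-NETFLOW-COLLECTOR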
Pattern-drop Action
The pattern-drop service action drops matching traffic. Use pattern matching to drop traffic based on criteria such as the following:
- URLs and user agents in the HTTP header
- patterns in BitTorrent packets
- encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
- subscriber device IP (user-endpoint IP)
Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.
Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.
GUI Configuration

CLI Configuration
Controller-1(config)# show running-config managed-service MS-PATTERN-DROP
! managed-service
managed-service MS-PATTERN-DROP
description "This service drops traffic that has an email address in its payload"
1 pattern-drop ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1
Pattern-match Action
The pattern-match service action matches and forwards matching traffic and is similar to the pattern-drop service action. Use pattern matching to forward traffic based on criteria such as the following:
- URLs and user agents in the HTTP header
- patterns in BitTorrent packets
- encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
- subscriber device IP (user-endpoint IP)
Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.
Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.
GUI Configuration

CLI Configuration
Use the pattern-match pattern keyword to enable the pattern-matching service action, specifying the pattern that packets must match. The following example matches the pattern Windows NT 5.0 or Windows NT 5.1 anywhere in the packet and delivers the matching packets to the delivery interface TOOL-PORT-TO-WIRESHARK-1. This service is optional and is applied to TCP traffic to destination port 80.
! managed-service
managed-service MS-PATTERN-MATCH
description 'regular expression filtering'
1 pattern-match 'Windows\\sNT\\s5\\.[0-1]'
service-interface switch CORE-SWITCH-1 ethernet13/1
! policy
policy PATTERN-MATCH
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'match regular expression pattern'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-PATTERN-MATCH sequence 1 optional
1 match tcp dst-port 80
Slice Action
The slice service action truncates each matching packet at a configurable offset, measured from one of the following anchors:
- Packet start
- L3 header start
- L4 header start
- L4 payload start
GUI Configuration

This page allows inserting an additional header containing the original packet length.
CLI Configuration
! managed-service
managed-service my-service-name
1 slice l3-header-start 20 insert-original-packet-length
service-interface switch DMF-CORE-SWITCH-1 ethernet20/1
! managed-service
managed-service MS-SLICE-1
description 'slicing service'
1 slice l4-payload-start 1
service-interface switch DMF-CORE-SWITCH-1 ethernet40/1
! policy
policy slicing-policy
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'remove payload'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-SLICE-1 sequence 1 optional
1 match tcp dst-ip 10.2.19.119 255.255.255.255 src-port 80
Packet Slicing on the 7280 Switch
This feature removes unwanted or unneeded bytes from a packet at a configurable byte position (offset). This approach is beneficial when the data of interest is situated within the headers or early in the packet payload. This action reduces the volume of the monitoring stream, particularly in cases where payload data is not necessary.
Another use case for packet slicing (slice action) is removing payload data to comply with regulations or policies governing captured traffic.
Within the DANZ Monitoring Fabric (DMF), two types of slice-managed services (packet slicing services) exist, distinguished by whether the service is installed on a service node or on an interface of a supported switch. The scope of this section is limited to the slice-managed service configured on a switch. The managed service interface is the switch interface used to configure this service.
All DMF 8.4 and above compatible 7280 switches support this feature. Use the show switch all property command to check which switches in the DMF fabric support this feature. The feature is supported if the Min Truncate Offset and Max Truncate Offset properties have a non-zero value.
# show switch all property
# Switch Min Truncate Offset ... Max Truncate Offset
-|------|-------------------|-----|-------------------|
1 7280   100                  ...  9236
2 core1                       ...
Using the CLI to Configure Packet Slicing - 7280 Switch
Configure the slice-managed service on a switch using the following steps. This feature requires the service interface to be in MAC loopback mode. Once a managed service for slice action exists, any policy can use it, as shown in the sketch following the list below.
Key points to consider while configuring the slice action on a supported switch:
- Only the packet-start anchor is supported.
- Ensure the offset is within the Min/Max truncate size bounds reported by the show switch all property command. If the configured value is beyond the bounds, DMF chooses the closest value of the range. For example, if a user configures the offset as 64, and the min truncate offset reported by switch properties is 100, then the offset used is 100. If the configured offset is 10,000 and the max truncate offset reported by the switch properties is 9236, then the offset used is 9236.
- A configured offset for slice-managed service includes FCS when programmed on a switch interface, which means an offset of 100 will result in a packet size of 96 bytes (accounting for 4-byte FCS).
- Configuring an offset below 17 is not allowed.
- The same service interface cannot chain multiple managed services.
- The insert-original-packet-length option is not applicable for switch-based slice-managed service.
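The following is a minimal sketch of a switch-based slice managed service and a policy that uses it, consistent with the show output below; the filter and delivery interface names are illustrative:
! managed-service
managed-service packet-slicing-7280
1 slice packet-start 101
service-interface switch 7280 ethernet10/1
! policy
policy packet-slicing-policy
action forward
filter-interface f1
delivery-interface d1
use-managed-service packet-slicing-7280 sequence 1
1 match any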
CLI Show Commands
Use the show policy policy-name command to see the runtime state of a policy using the slice-managed service. The command shows the service interface information and stats.
Controller# show policy packet-slicing-policy
Policy Name: packet-slicing-policy
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: packet-slicing-7280
Installed Time : 2023-08-09 19:00:40 UTC
Installed Duration : 1 hour, 17 minutes
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     7280   Ethernet2/1 up    rx  0       0     0        -        2023-08-09 19:00:40.305000 UTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     7280   Ethernet3/1 up    tx  0       0     0        -        2023-08-09 19:00:40.306000 UTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name        Role Switch IF Name       State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|-------------------|----|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 packet-slicing-7280 pre  7280   Ethernet10/1 up    tx  0       0     0        -        2023-08-09 19:00:40.305000 UTC
2 packet-slicing-7280 post 7280   Ethernet10/1 up    rx  0       0     0        -        2023-08-09 19:00:40.306000 UTC
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
Use the show managed-services command to view the status of all the managed services, including the packet-slicing managed service on a switch.
Controller# show managed-services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-services ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-------------------|------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 packet-slicing-7280 7280   Ethernet10/1     True      400Gbps             400Gbps            80bps                 80bps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Actions of Service Names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Sequence Service Action Slice Anchor Insert original packet length Slice Offset
-|-------------------|--------|--------------|------------|-----------------------------|------------|
1 packet-slicing-7280 1        slice          packet-start False                         101
Using the GUI to Configure Packet Slicing - 7280 Switch
Managed Service Configuration
Interface Loopback Configuration
The managed service interface used for slice action must be in MAC loopback mode.
Policy Configuration
Troubleshooting Packet Slicing
The show switch all property command provides the upper and lower bounds of the packet slicing action's offset. If the bounds are present, the feature is supported; otherwise, the switch does not support the packet slicing feature.
The show fabric errors managed-service-error command provides information when DANZ Monitoring Fabric (DMF) fails to install a configured packet slicing managed service on a switch. The following are examples of failure cases:
- The managed service interface is down.
- More than one action is configured on a managed service interface of the switch.
- The managed service interface on a switch is neither a physical interface nor a LAG port.
- A non-slice managed service is configured on a managed service interface of a switch.
- The switch does not support packet slicing managed service, and its interface is configured with slice action.
- Slice action configured on a switch interface is not using a packet-start anchor.
- The managed service interface is not in MAC loopback mode.
Use the following commands to troubleshoot packet-slicing issues.
Controller# show fabric errors managed-service-error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed Service related error ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Error Service Name
-|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
1 Pre-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down  packet-slicing-7280
2 Post-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down packet-slicing-7280
The show switch switch-name interface interface-name dmf-stats command provides Rx and Tx rate information for the managed service interface.
Controller# show switch 7280 interface Ethernet10/1 dmf-stats
# Switch DPID Name State Rx Rate Pkt Rate Peak Rate Peak Pkt Rate TX Rate Pkt Rate Peak Rate Peak Pkt Rate Pkt Drop Rate
-|-----------|------------|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|-------------|
1 7280        Ethernet10/1 down  -       0        128bps    0             -       0        128bps    0             0
The show switch switch-name interface interface-name stats command provides Rx and Tx counter information for the managed service interface.
Controller# show switch 7280 interface Ethernet10/1 stats
# Name Rx Pkts Rx Bytes Rx Drop Tx Pkts Tx Bytes Tx Drop
-|------------|-------|--------|-------|-------|--------|-------|
1 Ethernet10/1 22843477 0 5140845937 0
Packet Slicing Considerations
- Managed service action chaining is not supported when using a switch interface as a managed service interface.
- When configured for a supported switch, the managed service interface for slice action can only be a physical interface or a LAG.
- When using packet slicing managed service, packets ingressing on the managed service interface are not counted in the ingress interface counters, affecting the output of the show switch switch-name interface interface-name stats and show switch switch-name interface interface-name dmf-stats commands. This issue does not impact byte counters; all byte counters will show the original packet size, not the truncated size.
- A Dynamic Overlap policy causes all DMF-8.7.0 compatible 7280 switches to append an additional Vlan Tag Post Slice Service action. The default behavior of a fabric deployed in Push-per-Policy mode handles Single Vlan Strip at the delivery interface, removing the additional VLAN tag and leaving the original policy VLAN tag intact. In a non-default fabric deployment (Push-per-Filter), to deliver the original packet to the tool node after a slice action, enable strip-two-vlan on the delivery interface.
VXLAN Stripping on the 7280R3 Switch
Virtual Extensible LAN Header Stripping
Virtual Extensible LAN (VXLAN) Header Stripping supports the delivery of decapsulated packets to tools and devices in a DANZ Monitoring Fabric (DMF) fabric. This feature removes the VXLAN header, previously established in a tunnel for reaching the TAP Aggregation switch or inherent to the tapped traffic within the DMF. Within the fabric, DMF supports the installation of the strip VXLAN service on a filter interface or a filter-and-delivery interface of a supported switch.
Platform Compatibility
For DMF deployments, the target platform is DCS-7280R3.
Use the show switch all property command to verify which switches in the DMF fabric support this feature. The feature is supported if the Strip Header Supported property has the value BSN_STRIP_HEADER_CAPS_VXLAN.
# show switch all property
#: 1
Switch : lyd599
Max Phys Port: 1000000
Min Lag Port : 1000001
Max Lag Port : 1000256
Min Tunnel Port: 15000001
Max Tunnel Port: 15001024
Max Lag Comps: 64
Tunnel Supported : BSN_TUNNEL_L2GRE
UDF Supported: BSN_UDF_6X2_BYTES
Enhanced Hash Supported: BSN_ENHANCED_HASH_L2GRE,BSN_ENHANCED_HASH_L3,BSN_ENHANCED_HASH_L2,
BSN_ENHANCED_HASH_MPLS,BSN_ENHANCED_HASH_SYMMETRIC
Strip Header Supported : BSN_STRIP_HEADER_CAPS_VXLAN
Min Rate Limit : 1Mbps
Max Multicast Replication Groups : 0
Max Multicast Replication Entries: 0
PTP Timestamp Supported Capabilities : ptp-timestamp-cap-replace-smac, ptp-timestamp-cap-header-64bit,
ptp-timestamp-cap-header-48bit, ptp-timestamp-cap-flow-based,
ptp-timestamp-cap-add-header-after-l2
Min Truncate Offset: 100
Max Truncate Offset: 9236
Using the CLI to Configure VXLAN Header Stripping
Configuration
Use the following steps to configure strip-vxlan on a switch:
- Set the optional field strip-vxlan-udp-port in the switch configuration; the default udp-port for strip-vxlan is 4789.
- Enable or disable strip-vxlan on a filter or both-filter-and-delivery interface using the role both-filter-and-delivery interface-name filter-interface strip-vxlan command:
> enable
# config
(config)# switch switch-name
(config-switch)# strip-vxlan-udp-port udp-port-number
(config-switch)# interface interface-name
(config-switch-if)# role both-filter-and-delivery interface-name filter-interface strip-vxlan
(config-switch-if)# role both-filter-and-delivery interface-name filter-interface no-strip-vxlan
(config)# show running-config
- After enabling a filter interface with strip-vxlan, any policy can use it. From the config-policy submode, add the filter-interface to the policy:
(config)# policy p1
(config-policy)# filter-interface filter-interface
Proceed to Show Commands.
Show Commands
Use the show policy policy-name command to see the runtime state of a policy using a filter interface with strip-vxlan configured. It also shows the service interface information and stats.
# show policy strip-vxlan
Policy Name: strip-vxlan
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Installed Time : 2024-05-02 19:54:27 UTC
Installed Duration : 1 minute, 18 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     lyd598 Ethernet1/1 up    rx  0       0     0        -        2024-05-02 19:54:27.141000 UTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     lyd598 Ethernet2/1 up    tx  0       0     0        -        2024-05-02 19:54:27.141000 UTC
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
Using the GUI to Configure VXLAN Header Stripping
Filter Interface Configuration
To configure or edit a filter interface, select Interfaces from the Monitoring menu.


Configure a filter interface on a switch that supports strip-vxlan.

Enable or Disable Strip VXLAN.


Policy Configuration
Create a new policy using DMF Policies and add the filter interface with strip VXLAN enabled.


Select Add port(s) under the Traffic Sources option and add the Filter Interface.

Add another delivery interface and create the policy.

Syslog Messages
There are no syslog messages associated with this feature.
Troubleshooting
The show switch all property command provides the Strip Header Supported property of the switch. If the value BSN_STRIP_HEADER_CAPS_VXLAN is present, the feature is supported; otherwise, the switch does not support this feature.
The show fabric warnings feature-unsupported-on-device command provides information when DMF fails to enable strip-vxlan on an unsupported switch.
The show switch switch-name table strip-vxlan-header command provides the gentable details.
The following are examples of several failure cases:
- The filter interface is down.
- The interface with strip-vxlan is neither a filter interface nor a filter-and-delivery interface.
- The switch does not support strip-vxlan.
- Tunneling / UDF is enabled simultaneously with strip-vxlan.
- Unsupported pipeline mode with strip-vxlan enabled (strip-vxlan requires a specific pipeline mode, strip-vxlan-match-push-vlan).
Limitations
- When configured for a supported switch, the filter interface for the decap-vxlan action can only be a physical interface or a LAG.
- It is not possible to enable strip-vxlan simultaneously with tunneling / UDF.
- When enabling strip-vxlan on one or more interfaces of a switch, other filter interfaces on the same switch cannot match on the VXLAN header.
Session Slicing for TCP and UDP Sessions
Session-slice keeps track of TCP and UDP sessions (distinguished by source and destination IP address and port) and counts the number of packets sent in each direction (client-to-server and vice versa). After recognizing the session, the action transmits a user-configured number of packets to the tool node.
For TCP packets, session-slice tracks the number of packets sent in each direction after establishing the TCP handshake. Slicing begins after the packet count in a direction has reached the configured threshold in both directions.
For UDP packets, slicing begins after reaching the configured threshold in either direction.
By default, session-slice will operate on both TCP and UDP sessions but is configurable to operate on only one or the other.
Refer to the DANZ Monitoring Fabric (DMF) Verified Scale Guide for session-slicing performance numbers.
Configure session-slice in managed services through the Controller as a Service Node action.
Using the CLI to Configure Session Slicing
Configuration Steps
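A minimal configuration sketch, using the values from the running-config shown below; the session-slice submode prompt is an assumption:
dmf-controller-1(config)# managed-service managed_service_1
dmf-controller-1(config-managed-srv)# service-interface switch DCS-7050CX3-32S ethernet2/4
dmf-controller-1(config-managed-srv)# 1 session-slice
dmf-controller-1(config-managed-srv-session-slice)# slice-after 1000
dmf-controller-1(config-managed-srv-session-slice)# idle-timeout 60000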
The following show commands provide helpful information.
The show running-config managed-service managed-service-name command helps verify whether the session-slice configuration is complete.
dmf-controller-1(config)# show running-config managed-service managed_service_1
! managed-service
managed-service managed_service_1
!
1 session-slice
slice-after 1000
idle-timeout 60000
The show managed-services managed-service-name command provides status information about the service.
dmf-controller-1(config)# show managed-services managed_service_1
# Service Name      Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-----------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 managed_service_1 DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps
Using the GUI to Configure Session Slicing
Timestamp Action
The timestamp service action timestamps each matching packet with the time at which the service node received it.
GUI Configuration

CLI Configuration
! managed-service
managed-service MS-TIMESTAMP-1
1 timestamp
service-interface switch CORE-SWITCH-1 ethernet15/3
UDP-Replication Action
The UDP-replication service action copies UDP messages, such as Syslog or NetFlow messages, and sends the copied packets to a new destination IP address.
Configure a rate limit when enabling UDP replication. When upgrading from a version of DANZ Monitoring Fabric (DMF) before release 6.3.1, the UDP-replication configuration is not applied until a rate limit is applied to the delivery interface.
CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
CONTROLLER-1(config-switch)# interface ethernet1
CONTROLLER-1(config-switch-if)# role delivery interface-name udp-delivery-1
CONTROLLER-1(config-switch-if)# rate-limit 256000
GUI Configuration

Use the UDP-replication service to copy UDP traffic, such as Syslog messages or NetFlow packets, and send the copied packets to a new destination IP address. This function sends traffic to more destination syslog servers or NetFlow collectors than would otherwise be allowed.

For the header-strip service action only, configure the policy rules for matching traffic after applying the header-strip service action. After completing pages 1-4, select Append and enable the checkbox to apply the policy.
Select Save to save the managed service.
CLI Configuration
controller-1(config)# managed-service MS-UDP-REPLICATE-1
controller-1(config-managed-srv)# 1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
controller-1(config-managed-srv-udp-replicate)#
controller-1(config-managed-srv-udp-replicate)# in-dst-ip 10.1.1.1
controller-1(config-managed-srv-udp-replicate)# out-dst-ip 10.1.2.1
Redundancy of Managed Services in the Same DMF Policy
Using the GUI to Configure a Backup Managed Service
Using the CLI to Configure a Backup Managed Service
Application Identification
Using the CLI to Configure Application Identification
Configure the feature through the Controller in managed services.
Using the CLI to Configure app-id
managed-service ms
service-interface switch core1 ethernet2
!
1 app-id
collector 192.168.1.1
l3-delivery-interface app-to-analytics
After configuring app-id, refer to the Analytics Node for application reports and visualizations. For instance, a flow is classified internally with the following tuple: ip, tcp, http, google, and google_maps. Consequently, the Analytics Node displays the most specific app ID for this flow, google_maps, under appName.
On the Analytics Node, AppIDs 0-4 represent applications according to their numerical IDs: 0 is the most specific application identified in that flow, while 4 is the least specific. In the example above, ID 0 would be the numerical ID for google_maps, ID 1 google, ID 2 http, ID 3 tcp, and ID 4 the IP address. Use appName instead of these IDs, since the IDs require an ID-to-name mapping to interpret.

Using the CLI to Configure app-id-filter
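A minimal sketch of a standalone app-id-filter service, using the syntax from the combined example below; the service and interface names are illustrative:
managed-service MS-APP-FILTER
service-interface switch CORE-SWITCH-1 ethernet3
!
1 app-id-filter
app facebook
filter-mode forward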
Using the CLI to Configure app-id and app-id-filter Combined
Follow the configuration steps described in the earlier services to configure app-id-filter and app-id together. However, in this case, app-id should use a higher sequence number than app-id-filter. Thus, the traffic is processed through the app-id-filter policy first, then through app-id.
The following is an example of a combined app-id-filter and app-id configuration.
! managed-service
managed-service MS1
service-interface switch CORE-SWITCH-1 ethernet2
!
!
1 app-id-filter
app facebook
filter-mode forward
!
2 app-id
collector 1.1.1.1
l3-delivery-interface L3-INTF-1
Note: This configuration chains app-id-filter with app-id, dropping all traffic except facebook; this type of service chaining can cause a performance hit and high memory utilization.
Using the GUI to Configure app-id and app-id-filter
Dynamic Signature Updates (Beta Version)
- fetch: after a successful fetch on the active Controller, it invokes the fetch RPC on the standby Controller by providing a signed HTTP URL as the source. This URL points to an internal REST API that provides the recently fetched signature file.
- delete: the active Controller invokes the delete RPC call on the standby controllers.
The Controller stores the signature files in this location: /var/lib/capture/appidsignatureupdate.
- Verify the bundle version on the service node by entering the show service-node app-id-bundle-version command in the service node CLI, as shown below.
Figure 65: Before Update
Figure 66: After Update
CLI Show Commands
Service Node
Use the following show command:
show service-node app-id-bundle-version
This command shows the version of the bundle in use. An app-id or app-id-filter instance must be configured, or an error message is displayed.
dmf-servicenode-1# show app-id bundle-version
Name : bundle_version
Data : 1.680.0-22 (build date Sep 26 2023)
dmf-servicenode-1#
Controller
To obtain more information about the running version on a Service Node, or when the last push attempt was made and the outcome, use the following Controller CLI commands:
show app-id push-results [SN-name]
show service-node SN-name app-id
dmf-controller-1# show app-id push-results
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time             Last Attempt Version Last Attempt Time              Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------------------|--------------------|------------------------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.660.2-33      2023-12-06 11:13:36.662000 PST 1.680.0-22       2023-09-29 16:21:11.034000 PDT 1.660.2-33           2023-12-06 11:13:34.085000 PST success
dmf-controller-1# show service-node dmf-servicenode-1 app-id
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time Last Attempt Version Last Attempt Time Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------|--------------------|-----------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.680.0-22      2023-09-29 16:21:11.034000 PDT
The show app-id signature-files command displays the validated files that are available to push to Service Nodes.
dmf-controller-1# show app-id signature-files
# Signature-file Checksum      Fetch time
-|--------------|-------------|------------------------------|
1 file1.tar.gz   abcdefgh12345 2023-08-02 22:20:49.422000 UTC
2 file2.tar.gz   ijklmnop67890 2023-08-03 07:10:22.123000 UTC
The show analytics app-info filter-interface-name command displays aggregated information over the last 5 minutes about the applications seen on a given filter interface, sorted by unique flow count. This command also has an optional size option to limit the number of results; the default is all. Note that this command only works in push-per-filter mode.
dmf-controller-1# show analytics app-info filter-interface f1 size 3
# App name Flow count
-|--------|----------|
1 app1 1000
2 app2 900
3 app3 800
Syslog Messages
Syslog messages for configuring the app-id and app-id-filter services appear in a service node's syslog through journalctl.
A Service Node syslog registers events for the app-id add, modify, and delete actions.
These events contain the keywords dpi and dpi-filter, which correspond to app-id and app-id-filter.
For example:
Adding dpi for port,
Modifying dpi for port,
Deleting dpi for port,
Adding dpi filter for port,
Modifying dpi filter for port,
Deleting dpi filter for port,
App appname does not exist - An invalid app name was entered.
The addition, modification, or deletion of app names in an app-id-filter managed service in the Controller node's CLI influences the policy refresh activity, and these events register in floodlight.log.
Scale
- Max concurrent sessions are currently set to permit fewer than 200,000 active flows per core. Performance may drop as the number of concurrent flows increases. This value is a maximum to prevent the service from overloading. Surpassing this threshold may cause some flows not to be processed, and new flows will not be identified or filtered. Entries for inactive flows time out after a few minutes for ongoing sessions and a few seconds after the session ends.
- If there are many inactive sessions, DMF holds the flow contexts, reducing the number of available flows used for DPI. The timeouts are approximately 7 minutes for TCP sessions and 1 minute for UDP.
- Heavy application traffic load degrades performance.
Troubleshooting
- If IPFIX reports do not appear on an Analytics Node (AN) or Collector, ensure the UDP port is configured correctly and verify the AN receives traffic.
- If the app-id-filter app list does not appear, ensure a Service Node (SN) is connected using the show service-node command on the Controller.
- For app-id-filter, enter at least one valid application from the list that appears using <Tab>. Otherwise, the policy will fail to install with the error message: app-id-filter specified without at least one name TLV identifying application.
- A flow may contain other IDs and protocols when using app-id-filter. For example, the specific application for a flow may be google_maps, but there may be protocols or broader applications under it, such as ssh, http, or google. Adding google_maps will filter this flow; however, adding ssh will also filter it. Therefore, adding any of these to the filter list will cause packets of this flow to be forwarded or dropped.
- An IPFIX element, BSN type 14, that existed in DMF version 8.4 was removed in 8.6.
- During a dynamic signature update, if a SN reboot occurs, it will likely boot up with the correct version. To avoid issues of traffic loss, perform the update during a maintenance window. Also, during an update, the SN will temporarily not send LLDP packets to the Controller and disconnect for a short while.
- After a dynamic signature update, do not change configurations or push another signature file for several minutes. The update will take some time to process. If there are any VFT changes, it may lead to warning messages in floodlight, such as:
Sync job 2853: still waiting after 50002 ms Stuck switch update: R740-25G[00:00:e4:43:4b:bb:38:ca], duration=50002ms, stage=COMMIT
These messages may also appear when configuring DPI on a large number of ports.
Limitations
- When using a drop filter, a few packets may slip through the filter before determining an application ID for a flow, and when using a forward filter, a few packets may not be forwarded. Such a small amount is estimated to be between 1 and 6 packets at the beginning of a flow.
- When using a drop filter, add the unknown app ID to the filter list to drop any unidentified traffic if these packets are unwanted.
- The Controller must be connected to a Service Node for the app-id-filter app list to appear. If the list does not appear and the application names are unknown, use app-id to send reports to the Analytics Node. Use the application names seen there to configure an app-id-filter. The name must match exactly.
- Since app-category does not currently show the applications included in that category, do not use it when targeting specific apps. Categories like basic, which include all basic networking protocols like TCP and UDP, may affect all flows.
- For app-id, a report is only generated for a flow after that flow has been fully classified. Therefore, the number of reported applications may not match the total number of flows. These reports are sent after enough applications are identified on the Service Node. If many applications are identified, DMF sends the reports quickly; however, DMF sends these reports every 10 seconds when identifying only a few applications.
- DMF treats a bidirectional flow as part of the same n-tuple. As such, generated reports contain the client's source IP address and the server's destination IP address.
- While configuring many ports with app-id, there may occasionally be a few Rx drops on the 16-port machines at a high traffic rate in the first couple of seconds.
- The feature uses a cache that maps destination IP and port to the application. Caching may vary the performance depending on the traffic profile.
- The app-id and app-id-filter services are more resource-intensive than other services. Combining them in a service chain or configuring many instances of them may lead to degradation in performance.
- At scale, such as configuring 16 ports on the R740 DCA-DM-SEL, app-id may take a few minutes to set up on all these ports; this is also true when doing a dynamic signature update.
- The show analytics app-info command only works in push-per-filter VLAN mode.
- Mapping an interface name to identified application IDs from source traffic received via the filter interface on the Analytics Dashboard is allowed in push-per-filter mode only.
Redundancy of Managed Services Using Two DMF Policies
In this method, users can employ a second policy with a second managed service to provide redundancy. The idea here is to duplicate the policies but assign a lower policy priority to the second DANZ Monitoring Fabric (DMF) policy. In this case, the backup policy (and, by extension, the backup service) will always be active but only receive relevant traffic once the primary policy goes down. This method provides true redundancy at the policy, service-node, and core switch levels but uses additional network and node resources.
! managed-service
managed-service MS-SLICE-1
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag1
!
managed-service MS-SLICE-2
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag2
! policy
policy ACTIVE-POLICY
priority 101
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-1 sequence 1
1 match ip
!
policy BACKUP-POLICY
priority 100
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-2 sequence 1
1 match ip
Sharing Managed Services Across Policies
A DANZ Monitoring Fabric (DMF) Controller allows sharing of managed services utilizing L3 delivery interfaces (e.g., NetFlow, IPFIX, app ID, etc.) across multiple policies. Prior to the DMF 8.7.0 release, DMF did not support managed service sharing because the L3 delivery interface was an optional setting in a policy configuration. However, sharing is now supported because the managed service configuration must now specify the L3 delivery interface.
When multiple policies (overlapping or non-overlapping) share the same Managed Service, the system will ensure that post-service traffic from all these policies is forwarded to appropriate delivery interfaces using a dynamically created post-service policy.
This change applies to managed service actions that use L3 deliveries, such as those listed above, on all platforms that support managed services.
Configuration using the CLI
There are no new configurations in the Sharing Managed Services Across Policies feature, but several configuration validations surrounding the managed service configuration within a policy have changed:
- Removed the validation that prevented adding an L3 header modifying managed service to multiple policies.
- A validation was added to enforce the order of shared managed services.
When sharing multiple managed services with at least one action that requires an L3 delivery interface, the only valid case is where non-UDP-replicate L3 services are followed by UDP replicate:
- Invalid: any non-UDP replicate → any non-UDP replicate.
- Invalid: UDP replicate → any non-UDP replicate.
- Invalid: UDP replicate → UDP replicate.
- Valid: any non-UDP replicate → UDP replicate.
There are two fundamental exceptions to the earlier examples:
- The push-per-filter mode doesn't allow multiple header-modifying services in a policy.
- Flow diff is only supported in push-per-filter mode and cannot be chained with UDP replicate.
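For instance, under these rules a policy may chain a NetFlow service followed by a UDP-replicate service, but not the reverse. The following is a minimal sketch, assuming hypothetical managed services ms-nf (a netflow action) and ms-udp (a udp-replicate action) and a filter interface TAP-PORT-1 already exist:
dmf-controller(config)# policy p-ordered
dmf-controller(config-policy)# action forward
dmf-controller(config-policy)# 1 match any
dmf-controller(config-policy)# filter-interface TAP-PORT-1
dmf-controller(config-policy)# use-managed-service ms-nf sequence 1
dmf-controller(config-policy)# use-managed-service ms-udp sequence 2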
The following is a sample configuration sharing a Netflow managed service across two policies.
Configure an L3 delivery interface:
dmf-controller> enable
dmf-controller# configure
dmf-controller(config)# switch sw1
dmf-controller(config-switch)# interface ethernet31
dmf-controller(config-switch-if)# role delivery interface-name l3-d1 ip-address 0.0.0.1 nexthop-ip 0.0.0.2 255.255.255.0 nexthop-arp-interval 5
dmf-controller(config-switch-if)# rate-limit 1000
Create two managed services: one that is non-shareable and one with netflow, which can be shared.
Non-shared Managed Service
dmf-controller> enable
dmf-controller# configure
dmf-controller(config)# managed-service ms-non-shareable
dmf-controller(config-managed-srv)# service-interface switch sw1 ethernet2
dmf-controller(config-managed-srv)# 1 sample
dmf-controller(config-managed-srv-sample)# max-tokens 100
dmf-controller(config-managed-srv-sample)# tokens-per-refresh 10
Shared Managed Service
dmf-controller> enable
dmf-controller# configure
dmf-controller(config)# managed-service ms-netflow
dmf-controller(config-managed-srv)# service-interface switch sw1 ethernet1
dmf-controller(config-managed-srv)# 1 netflow
dmf-controller(config-managed-srv-netflow)# collector 0.0.0.2 udp-port 2055 mtu 1500
dmf-controller(config-managed-srv-netflow)# l3-delivery-interface l3-d1
Configure two policies, where the first policy, p1, has both ms-non-shareable and ms-netflow in order, and the second policy, p2, only has ms-netflow:
Policy 1
dmf-controller> enable
dmf-controller# configure
dmf-controller(config)# policy p1
dmf-controller(config-policy)# action forward
dmf-controller(config-policy)# 1 match any
dmf-controller(config-policy)# filter-interface f1
dmf-controller(config-policy)# use-managed-service ms-non-shareable sequence 1
dmf-controller(config-policy)# use-managed-service ms-netflow sequence 2
Policy 2
dmf-controller> enable
dmf-controller# configure
dmf-controller(config)# policy p2
dmf-controller(config-policy)# action forward
dmf-controller(config-policy)# 1 match any
dmf-controller(config-policy)# filter-interface f2
dmf-controller(config-policy)# use-managed-service ms-netflow sequence 1
In push-per-policy mode, both p1 and p2 are expected to be installed to send traffic to the ms-netflow pre-service interface, and a dynamic post-service policy p1_p2__post_to_delivery__ is created to carry the traffic from the post-service interface of ms-netflow to l3-d1.
In push-per-filter mode, a post-service policy is created regardless of whether a managed service is shared. The result is two post-service policies, p1__post_to_delivery__ and p2__post_to_delivery__, which carry traffic from their respective configured policies to the delivery interface via the configured services.
The following section shows the runtime state of these policies.
Show Commands
While no new show commands accompany the Sharing Managed Services Across Policies feature, use the following existing commands to verify the configuration.
Use the show running-config switch sw1 interface l3-interface command to view the configured L3 interface:
dmf-controller> show running-config switch sw1 interface ethernet31
! switch
switch sw1
!
interface ethernet31
force-link-up
rate-limit 1000
role delivery interface-name l3-d1 ip-address 0.0.0.1 nexthop-ip 0.0.0.2 255.255.255.0 nexthop-arp-interval 5
Use the show running-config managed-service command to view the managed service running config:
dmf-controller> show running-config managed-service
! managed-service
managed-service ms-netflow
service-interface switch sw1 ethernet1
!
1 netflow
collector 0.0.0.2 udp-port 2055 mtu 1500
l3-delivery-interface l3-d1
managed-service ms-non-shareable
service-interface switch sw1 ethernet2
!
1 sample
max-tokens 100
tokens-per-refresh 10
Use the show running-config policy command to view the policy running config:
dmf-controller> show running-config policy
! policy
policy p1
action forward
filter-interface f1
use-managed-service ms-netflow sequence 2
use-managed-service ms-non-shareable sequence 1
1 match any
policy p2
action forward
filter-interface f2
use-managed-service ms-netflow sequence 1
1 match any
Use the show switch sw1 interface l3-interface command to view the runtime status of the interface:
dmf-controller> show switch sw1 interface ethernet31
# IF Name     MAC Address                Config State Adv. Features Curr Features Supported Features
-|----------|--------------------------|------|-----|-------------|-------------|------------------|
1 ethernet31 5c:16:c7:14:46:bb (Arista) up     up    10g           10g           10g
Use the show managed-service command to view the runtime status of the managed services:
dmf-controller> show managed-service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name     Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|----------------|------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 ms-netflow       sw1    ethernet1        True      10Gbps              10Gbps             974bps                66bps
2 ms-non-shareable sw1    ethernet2        True      10Gbps              10Gbps             960bps                66bps
Push-per-policy Mode
The show policy command displays a brief version of the runtime state of all policies, including those created dynamically:
dmf-controller> show policy
# Policy Name               Action  Runtime Status Type       Priority Overlap Priority Push VLAN Filter BW Delivery BW Post Match Filter Traffic Delivery Traffic Services         Installed Time          Installed Duration  Ptp Timestamping
-|-------------------------|-------|--------------|----------|--------|----------------|---------|---------|-----------|-------------------------|----------------|----------------|-----------------------|-------------------|----------------|
1 p1                        forward installed      Configured 100      0                1         10Gbps    10Gbps      -                         -                ms-non-shareable 2025-02-20 18:18:33 UTC 20 minutes, 46 secs False
2 p2                        forward installed      Configured 100      0                2         10Gbps    10Gbps      -                         -                                  2025-02-20 18:18:33 UTC 20 minutes, 46 secs False
3 p1_p2__post_to_delivery__ forward installed      Dynamic    100      0                3         10Gbps    10Gbps      -                         -                                  2025-02-20 18:18:33 UTC 20 minutes, 46 secs False
The show policy policy-name command provides a more detailed view of a policy at runtime.
Since a new post-service policy is required for shared services in push-per-policy mode (in push-per-filter mode, a post-service policy is always created, shared or not), the configured policies are adjusted accordingly. As a result, p1 and p2 will neither have the ms-netflow service nor l3-d1 as the delivery interface.
p1 carries traffic from f1 to ms-non-shareable and then delivers it to ms-netflow.
dmf-controller> show policy p1
Policy Name: p1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : p1_p2__post_to_delivery__,
Component Policies : none
Runtime Service Names: ms-non-shareable
Installed Time : 2025-02-20 18:18:33 UTC
Installed Duration : 23 minutes, 18 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 f1     sw1    ethernet11 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet1-to-managed-service sw1    ethernet1 up    tx  2       140   0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name     Role         Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|----------------|------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 ms-non-shareable pre-service  sw1    ethernet2 up    tx  0       0     0        -
2 ms-non-shareable post-service sw1    ethernet2 up    rx  2       140   0        -
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
p2 carries traffic from f2 to ms-netflow.
dmf-controller> show policy p2
Policy Name: p2
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 2
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : p1_p2__post_to_delivery__,
Component Policies : none
Installed Time : 2025-02-20 18:18:33 UTC
Installed Duration : 23 minutes, 21 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 f2     sw1    ethernet12 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet1-to-managed-service sw1    ethernet1 up    tx  0       0     0        -
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
p1_p2__post_to_delivery__ receives traffic from ms-netflow and then delivers it to l3-d1.
dmf-controller> show policy p1_p2__post_to_delivery__
Policy Name: p1_p2__post_to_delivery__
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 3
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : p1, p2,
Installed Time : 2025-02-20 18:18:33 UTC
Installed Duration : 23 minutes, 28 secs
Timestamping enabled : False
~ Match Rules ~
None.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet1-to-managed-service sw1    ethernet1 up    rx  1       70    0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 l3-d1  sw1    ethernet31 up    tx  1       70    0        -
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
Another way of visualizing this runtime state is illustrated in the following diagram:

Push-per-filter Mode
The show policy command displays a brief version of the runtime state of all policies, including any created dynamically:
dmf-controller> show policy
# Policy Name            Action  Runtime Status Type       Priority Overlap Priority Push VLAN Filter BW Delivery BW Post Match Filter Traffic Delivery Traffic Services   Installed Time          Installed Duration  Ptp Timestamping
-|----------------------|-------|--------------|----------|--------|----------------|---------|---------|-----------|-------------------------|----------------|----------|-----------------------|-------------------|----------------|
1 p1                     forward installed      Configured 100      0                0         10Gbps    10Gbps      -                         -                            2025-02-20 19:13:39 UTC 15 minutes, 29 secs False
2 p2                     forward installed      Configured 100      0                0         10Gbps    10Gbps      -                         -                            2025-02-20 19:13:39 UTC 15 minutes, 29 secs False
3 p1__post_to_delivery__ forward installed      Dynamic    100      0                0         10Gbps    10Gbps      -                         -                ms-netflow  2025-02-20 19:13:39 UTC 15 minutes, 29 secs False
4 p2__post_to_delivery__ forward installed      Dynamic    100      0                0         10Gbps    10Gbps      -                         -                            2025-02-20 19:13:39 UTC 15 minutes, 29 secs False
The show policy policy-name command provides a more detailed view of a policy at runtime:
p1 carries traffic from f1 to the ms-non-shareable service.
dmf-controller> show policy p1
Policy Name: p1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 0
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : p1__post_to_delivery__,
Component Policies : none
Installed Time : 2025-02-20 19:13:39 UTC
Installed Duration : 17 minutes, 55 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 f1     sw1    ethernet11 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet2-to-managed-service sw1    ethernet2 up    tx  0       0     0        -
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
p2 carries traffic from f2 to the ms-netflow service.
dmf-controller> show policy p2
Policy Name: p2
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 0
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : p2__post_to_delivery__,
Component Policies : none
Installed Time : 2025-02-20 19:13:39 UTC
Installed Duration : 18 minutes, 44 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 f2     sw1    ethernet12 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet1-to-managed-service sw1    ethernet1 up    tx  0       0     0        -
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
p1__post_to_delivery__ carries traffic from ms-non-shareable to ms-netflow and then to l3-d1.
dmf-controller> show policy p1__post_to_delivery__
Policy Name: p1__post_to_delivery__
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 0
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : p1,
Runtime Service Names: none
Timestamping enabled : False
~ Match Rules ~
None.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet2-to-managed-service sw1    ethernet2 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 l3-d1  sw1    ethernet31 up    tx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name Role         Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------------|------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 ms-netflow   pre-service  sw1    ethernet1 up    tx  0       0     0        -
2 ms-netflow   post-service sw1    ethernet1 up    rx  0       0     0        -
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
p2__post_to_delivery__ carries traffic from ms-netflow to l3-d1.
dmf-controller> show policy p2__post_to_delivery__
Policy Name: p2__post_to_delivery__
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 0
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : p2,
Installed Time : 2025-02-20 19:13:39 UTC
Installed Duration : 22 minutes, 19 secs
Timestamping enabled : False
~ Match Rules ~
None.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF                           Switch IF Name   State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|--------------------------------|------|---------|-----|---|-------|-----|--------|--------|------------------|
1 sw1-ethernet1-to-managed-service sw1    ethernet1 up    rx  0       0     0        -
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|----------|-----|---|-------|-----|--------|--------|------------------|
1 l3-d1  sw1    ethernet31 up    tx  0       0     0        -
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
Another way of visualizing this runtime state is illustrated in the following diagram:

Troubleshooting
If failures occur when sharing L3 managed services between policies, ensure the order of services shared by all policies is the same; the order of services matters, and partially sharing the sequence is not allowed:
- Policy p1 cannot have ms1 → ms2 while policy p2 has ms2 → ms1.
- Policy p1 cannot have ms1 → ms2 while policy p2 only has ms1.
Limitations
- When creating a post-service policy, configured policies are modified at runtime. So, the runtime state of a configured policy will show the modified delivery interfaces and services. You must use configured and dynamic policies to visualize the complete path.
- Not all managed service actions can be shared.
- Shared managed services must be last in the sequence and in the same order for all policies using them. Partial sharing of a sub-sequence of services is not allowed: policy p1 cannot have ms1 → ms2 while policy p2 only has ms1.
Cloud Services Filtering
The DANZ Monitoring Fabric (DMF) supports traffic filtering to specific services hosted in the public cloud and redirecting filtered traffic to customer tools. DMF achieves this functionality by reading the source and destination IP addresses of specific flows, identifying the Autonomous System number they belong to, tagging the flows with their respective AS numbers, and redirecting them to customer tools for consumption.
The following is the list of services supported:
- amazon: traffic with src/dst IP belonging to Amazon
- ebay: traffic with src/dst IP belonging to eBay
- facebook: traffic with src/dst IP belonging to Facebook
- google: traffic with src/dst IP belonging to Google
- microsoft: traffic with src/dst IP belonging to Microsoft
- netflix: traffic with src/dst IP belonging to Netflix
- office365: traffic for Microsoft Office365
- sharepoint: traffic for Microsoft Sharepoint
- skype: traffic for Microsoft Skype
- twitter: traffic with src/dst IP belonging to Twitter
- default: traffic not matching other rules in this service. Supported types are match or drop.
The option drop instructs the DMF Service Node to drop packets matching the configured application.
The option match instructs the DMF Service Node to deliver packets to the delivery interfaces connected to the customer tool.
The default rule handles packets that match none of the other rules, that is, when either of the following conditions occurs:
- The stream's source IP address or destination IP address doesn't belong to any AS number.
- The stream's source IP address or destination IP address is affiliated with an AS number but has no specific action set.
Configure drop default to drop such packets, or match default to deliver them to the customer tool.
Cloud Services Filtering Configuration
Controller(config)# managed-service name
Controller(config-managed-srv)#
Controller(config-managed-srv)# 1 app-filter
Controller(config-managed-srv-appfilter)#
Controller(config-managed-srv-appfilter)# 1 drop sharepoint
Controller(config-managed-srv-appfilter)# 2 match google
Controller(config-managed-srv-appfilter)# show this
! managed-service
managed-service sf3
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 app-filter
1 drop sharepoint
2 match google
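To make the handling of unclassified traffic explicit, a default rule can be appended in the same app-filter submode. This is a sketch; rule number 3 is arbitrary, and drop can be replaced with match to deliver such traffic instead:
Controller(config-managed-srv-appfilter)# 3 drop default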
Controller(config)# show running-config managed-service incomplete-managed-service
! managed-service
managed-service incomplete-managed-service
1 app-filter
Controller(config)# show running-config policy incomplete-policy
! policy
policy incomplete-policy
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service incomplete-managed-service sequence 1
1 match any
Controller(config-managed-srv-appfilter)# show policy incomplete-policy
Policy Name : incomplete-policy
Config Status : active - forward
Runtime Status : one or more required service down
Detailed Status : one or more required service down - installed to forward
Priority : 100
Overlap Priority : 0
Multiple Services Per Service Node Interface
The service-node capability is augmented to support more than one service action per service-node interface. Though this feature is economical regarding per-interface cost, it could cause packet drops in high-volume traffic environments. Arista Networks recommends using this feature judiciously.
controller-1# show running-config managed-service Test
! managed-service
managed-service Test
service-interface switch CORE-SWITCH-1 ethernet13/1
1 dedup full-packet window 2
2 mask BIGSWITCH
3 slice l4-payload-start 0
!
4 netflow an-collector
collector 10.106.6.15 udp-port 2055 mtu 1500


- The NetFlow/IPFIX-action configuration should not be followed by the timestamp service action.
- Ensure the UDP-replication action configuration is the last service in the sequence, as illustrated in the sketch after this list.
- The header-stripping service with post-service-match rule configured should not be followed by the NetFlow, IPFIX, udp-replication, timestamp and TCP-analysis services.
- When configuring a header strip and slice action, the header strip action must precede the slice action.
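To illustrate these ordering rules, the following sketch keeps the UDP-replication action last in the chain, after a slice action. The service name, interface, and IP addresses are hypothetical, and the udp-replicate submode commands are inferred from the running-config examples shown later in this chapter:
controller-1(config)# managed-service Ordered-Chain
controller-1(config-managed-srv)# service-interface switch CORE-SWITCH-1 ethernet13/2
controller-1(config-managed-srv)# 1 slice l4-payload-start 0
controller-1(config-managed-srv)# 2 udp-replicate l3-d1
controller-1(config-managed-srv-udp-replicate)# in-dst-ip 1.1.1.1
controller-1(config-managed-srv-udp-replicate)# out-dst-ip 2.2.2.2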
Sample Service
With the DANZ Monitoring Fabric (DMF) Sample Service feature, the Service Node forwards packets based on the max-tokens and tokens-per-refresh parameters. The sample service uses one token to forward one packet.
After consuming all the initial tokens from the max-tokens bucket, the system drops subsequent packets until the max-tokens bucket refills using the tokens-per-refresh counter at a recurring predefined time interval of 10 ms. Packet sizes do not affect this service.
Arista Networks recommends keeping the tokens-per-refresh value at or below max-tokens, for example, max-tokens = 1000 and tokens-per-refresh = 500.
Setting the max-tokens value to 1000 means that the initial number of tokens is 1000, and the maximum number of tokens stored at any time is 1000.
The max-tokens bucket reaches zero if the Service Node forwards 1000 packets before the first 10 ms period ends, at which point the Service Node stops forwarding packets. After every 10 ms interval, the max-tokens bucket is refilled with the configured tokens-per-refresh value, 500 tokens in this case, which the service immediately uses to pass packets.
Suppose the traffic rate is higher than the refresh amount added. In that case, available tokens will eventually drop back to 0, and every 10ms, only 500 packets will be forwarded, with subsequent packets being dropped.
If the traffic rate is lower than the refresh amount added, a surplus of tokens results in all packets passing. Since the system consumes only some of the tokens before the next refresh interval, available tokens accumulate until they reach the max-tokens value of 1000. The system does not store any surplus tokens above the max-tokens value.
To estimate the maximum possible packets passed per second (pps), use the calculation (1000 ms / 10 ms) × tokens-per-refresh, assuming the max-tokens value is larger than tokens-per-refresh. For example, if the tokens-per-refresh value is 5000, then 500000 pps are passed.
The Sample Service feature can be used as a standalone Managed Service or chained with other Managed Services.
Use Cases and Compatibility
- Applies to Service Nodes
- Limit traffic to tools that cannot handle a large amount of traffic.
- Use the Sample Service before another managed service to decrease the load on that service.
- The Sample Service is applicable when needing only a portion of the total packets without specifically choosing which packets to forward.
Sample Service CLI Configuration
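A sample service is configured as a managed service with the sample action and the two token parameters. The following is a minimal sketch that matches the show output below; the switch and interface names are placeholders:
DMF-SCALE-R450(config)# managed-service techpubs
DMF-SCALE-R450(config-managed-srv)# service-interface switch CORE-SWITCH-1 ethernet13/1
DMF-SCALE-R450(config-managed-srv)# 1 sample
DMF-SCALE-R450(config-managed-srv-sample)# max-tokens 1000
DMF-SCALE-R450(config-managed-srv-sample)# tokens-per-refresh 500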
Show Commands
Use the show running-config managed-service sample_service_name command to view pertinent details. In this example, the sample_service_name is techpubs.
DMF-SCALE-R450> show running-config managed-service techpubs
! managed-service
managed-service techpubs
!
1 sample
max-tokens 1000
tokens-per-refresh 500
DMF-SCALE-R450>
Sample Service GUI Configuration
Troubleshooting Sample Service
- If the number of packets forwarded by the Service Node interfaces is low, the max-tokens and tokens-per-refresh values likely need to be higher.
- If fewer packets than the tokens-per-refresh value are forwarded, ensure the max-tokens value is larger than the tokens-per-refresh value. The system discards any surplus refresh tokens above the max-tokens value.
- When all traffic is forwarded, the initial max-tokens value is too large, or the tokens refreshed by tokens-per-refresh exceed the packet rate.
- Packet drops after the first 10 ms from the start of traffic may be due to a low tokens-per-refresh value. Calculate the minimum max-tokens and tokens-per-refresh values that would lead to forwarding all packets, as in the following example.
Calculation Example
Traffic Rate: 400 Mbps
Packet Size: 64 bytes
400 Mbps = 400000000 bps
400000000 bps = 50000000 Bps
50000000 Bps = 595238 pps (includes 20 bytes of inter-packet gap in addition to the 64 bytes)
1000 ms = 595238 packets
1 ms = 595.238 packets
10 ms = 5952 packets
max-tokens: 5952 (the minimum value)
tokens-per-refresh: 5952 (the minimum value)
Limitations
- In the current implementation, the Service Sample action is bursty. The token consumption rate is not regulated over time, so a large burst of incoming packets can immediately consume all the tokens in the bucket. There is currently no way to select which traffic is forwarded or dropped; the outcome depends only on when packets arrive relative to the refresh interval.
- Setting the max-tokens and tokens-per-refresh values too high will forward all packets. The maximum value is 9,223,372,036,854,775,807, but Arista Networks recommends staying within the maximum values stated under the description section.
Flow Diff Latency and Drop Analysis
Latency and drop information help determine if there is a loss in a particular flow and where the loss occurred. A Service Node action configured as a DANZ Monitoring Fabric (DMF) managed service has multiple separate taps or spans in the production network and can measure the latency of a flow traversing through any pair of these points. It can also detect packet drops between any two points in the network if the packet only appears on one point within a specified time frame, currently set to 200ms.
Latency and drop analysis require Precision Time Protocol (PTP) time-stamped packets. The DMF PTP timestamping feature can do this as the packets enter the monitoring fabric, or the production network switches can also timestamp the packet.
The Service Node accumulates latency values by flow and sends IPFIX data records with each flow's 5-tuple and ingress and egress identifiers. It sends IPFIX data records to the Analytics Node after collecting a specified number of values for a flow or when a timeout occurs for the flow entry. The threshold count is 10,000 packets, and the flow timeout is 4 seconds.
This feature is supported only in push-per-filter mode. Only basic statistics, such as min, max, and mean, are available. These statistics are the computed difference in timestamps, or latency, between two tap point pairs of packets within a flow. Use the DMF Analytics Node to build custom dashboards to view and check the data.
Configure Flow Diff Latency and Drop Analysis Using the CLI
To enable latency and drop analysis, configure a managed service with the service action flow-diff.
Latency configuration configures multiple tap point pairs. DMF analyzes traffic flowing between these tap point pairs for latency or drops. A tap point pair comprises a source and a destination tap point, identified by the filter interface or filter interface group. Based on the latency configuration, configuring the Service Node with traffic metadata tells the Service Node where to look for tap point information, timestamps, and the IPFIX collector.
Configure appropriate DMF Policies to deliver traffic tapped from tap point pairs in the network to the configured Service Node interface for analysis.
Configuration Steps for flow-diff
The following example configures flow-diff using the steps mentioned earlier:
dmf-controller-1(config)# managed-service managed_service_1
dmf-controller-1(config-managed-srv)# service-interface switch delivery1 ethernet1
dmf-controller-1(config-managed-srv)# 1 flow-diff
dmf-controller-1(config-managed-srv-flow-diff)# collector 192.168.1.1
dmf-controller-1(config-managed-srv-flow-diff)# l3-delivery-interface l3-iface-1
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface f1 destination filter-interface f2
dmf-controller-1(config-managed-srv-flow-diff)# latency-table-size small|medium|large
dmf-controller-1(config-managed-srv-flow-diff)# packet-timeout 100
dmf-controller-1(config-managed-srv-flow-diff)# flow-timeout 4000
dmf-controller-1(config-managed-srv-flow-diff)# sample-count-threshold 10000
Configuring Tap Points
Configure tap-point-pair parameters in the flow-diff submode, specifying one of two identifiers: a filter interface name or a filter-interface-group.
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source <Tab>
filter-interface filter-interface-group
The filter-interface-group option takes any configured filter interface group used to represent a collection of tap points in push-per-filter mode. This optional form is convenient when a group of tap points exists that all expect traffic from the same source tap point or group of source tap points. For example:
- Instead of having two separate tap-point-pairs to represent A → B and A → C, use a filter-interface-group G = [B, C] and only one tap-point-pair A → G:
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface A destination filter-interface-group G
- With a topology like A → C and B → C, configure a filter-interface-group G = [A, B] and tap-point-pair G → C:
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface-group G destination filter-interface C
There are some restrictions to keep in mind while configuring tap-point-pairs:
- A source and destination must exist and cannot refer to the same tap point.
- You can only configure a maximum of 1024 tap points, and therefore a maximum of 512 tap-point-pairs. This includes all managed services with the flow-diff action, system-wide.
- The configured filter-interface-group must not overlap with other groups within the same managed service and cannot have more than 128 members.
- When configuring multiple tap-point-pairs using filter-interface and filter-interface-group, an interface that is part of a filter interface group cannot simultaneously be used as an individual source or destination within the same managed service.
If an interface group contains more than one LAG grouping where each LAG contains multiple ids, only one member id of each LAG is required; latency is not computed for subsequent ids of the same LAG if they are found.
Configuring Policy
Configure the policies so that the same packet can be tapped from two independent points in the network and then sent to the Service Node.
After creating a policy, add the managed service with the flow-diff action as shown below:
dmf-controller-1 (config-policy)# use-managed-service service name sequence 1
There are several things to consider while configuring policies in push-per-filter mode:
- Only one policy can contain the flow-diff service action.
- A policy should have all filter-interfaces and filter-interface-groups configured as tap points in the flow-diff configuration. Any missing filter interfaces or groups in a policy may result in reported drops, as those packets do not forward from one end of the tap-point-pair to the Service Node.
- It is also advisable not to add any filter interfaces (or groups) that are not in the tap-point-pairs; their latency and drop analysis will not be done, causing unnecessary packets to be forwarded to the Service Node, which are then reported as unexpected.
- Ensure accurate timestamping and avoid unexpected report types by correctly ordering the source and destination for tap point pairs. Tap point pairs are not bidirectional, so to compute latencies for both directions, add the reverse ordering of the tap point pair.
Enable timestamping for the policy traffic with the use-timestamping command:
dmf-controller-1(config-policy)# use-timestamping
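Putting these steps together, the following is a minimal policy sketch; it assumes the flow-diff managed service managed_service_1 from the earlier example, filter interfaces f1 and f2 configured as a tap-point-pair, and an arbitrary policy name:
dmf-controller-1(config)# policy flow-diff-policy
dmf-controller-1(config-policy)# action forward
dmf-controller-1(config-policy)# 1 match any
dmf-controller-1(config-policy)# filter-interface f1
dmf-controller-1(config-policy)# filter-interface f2
dmf-controller-1(config-policy)# use-managed-service managed_service_1 sequence 1
dmf-controller-1(config-policy)# use-timestamping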
Configuring PTP Timestamping
This feature depends on configuring PTP timestamping for the packet stream going through the tap points. Refer to the Resources section for more information on setting up PTP timestamping functionality. Arista strongly recommends using replace-src-mac for this feature until revised, as performance and Service Node compatibility may vary for add-header-after-l2.
Analytics Node
IPFIX reports received on the analytics node can be of one of the following types, indicated by the reportType field:
- 0 is a valid latency report with the latency computed between the two tap point pairs.
- 1 means it is a drop report: the packet did not arrive at its destination. If the egress identifier field is 0, the packet was only received at one tap point. If the egress identifier contains a nonzero value, one or more of the LAG groupings associated with a destination interface group did not receive any packets associated with an id in that LAG; the egress identifier field is filled with the id of one of the members in that LAG group.
- 2 means unexpected, which can be caused by packets with identifiers not included in any tap point pairs or a misordering of timestamps where the destination comes before the source tap.
- 3 means overflow, where this same packet appears at too many tap points. Packets that surpass the maximum tap points are ignored.
VXLAN
When using VXLAN, the headers of these packets must be decapsulated before flow-diff can process them on the service node. The filter switch must perform the decapsulation. If encapsulated packets appear at one tap point and non-encapsulated packets at the other, the service requires both of these packet streams to be decapsulated. Arista recommends using the following process to ensure flow-diff works correctly.
- Configure the filter interface receiving the VXLAN encapsulated traffic to perform decapsulation.
- If the VXLAN traffic arrives on a UDP port other than the default 4789, configure the filter switch with the non-default port number as shown in the following example:
filter-switch-1# configure
filter-switch-1(config)# interface Vxlan 1
filter-switch-1(config-if-Vx1)# vxlan udp-port customer-port
Show Commands
The following show commands provide helpful information.
The show running-config managed-service managed-service-name command helps verify whether the flow-diff configuration is complete.
dmf-controller-1(config)# show running-config managed-service flow-diff
! managed-service
managed-service flow-diff
service-interface switch DCS-7050CX3-32S ethernet2/4
!
1 flow-diff
collector 192.168.1.1
l3-delivery-interface AN-Data
tap-point-pair source filter-interface ingress destination filter-interface egress
The show managed-services managed-service-name command provides status information about the service.
dmf-controller-1(config)# show managed-services flow-diff
# Service Name Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 flow-diff    DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps
The show running-config policy policy-name command checks whether the policy includes the flow-diff service, whether use-timestamping is enabled, and that the correct filter interfaces are used.
dmf-controller-1(config)# show running-config policy p1
! policy
policy p1
action forward
filter-interface egress
filter-interface ingress
use-managed-service ms1 sequence 1
use-timestamping
1 match any
The show policy policy-name command provides detailed information about a policy and any errors related to the flow-diff service. The Service Interface(s) section shows the packets transmitted to the Service Node and the IPFIX packets received from the Service Node.
dmf-controller-1 (config)# show policy flow-diff-1
Policy Name: flow-diff-1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 4
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 2
Post Match Filter Traffic: 215Mbps
Total Delivery Rate: -
Total Pre Service Rate : 217Mbps
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: flow-diff
Installed Time : 2023-11-16 18:15:27 PST
Installed Duration : 19 minutes, 45 secs
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch   IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------|--------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 BP1    7280SR3E Ethernet25 up    rx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF  Switch    IF Name    State Dir Packets Bytes  Pkt Rate Bit Rate Counter Reset Time
-|-------|---------|----------|-----|---|-------|------|--------|--------|------------------------------|
1 AN-Data 7050SX3-1 ethernet41 up    tx  81      117222 0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name Role         Switch          IF Name     State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------------|------------|---------------|-----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 flow-diff    pre-service  DCS-7050CX3-32S ethernet2/4 up    tx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
2 flow-diff    post-service DCS-7050CX3-32S ethernet2/4 up    rx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Core Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch          IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|---------------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 7050SX3-1       ethernet7  up    rx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
2 7050SX3-1       ethernet56 up    rx  81       117222      0        -        2023-11-16 18:18:18.837000 PST
3 7050SX3-1       ethernet56 up    tx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
4 7280SR3E        Ethernet7  up    tx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
5 DCS-7050CX3-32S ethernet28 up    tx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
6 DCS-7050CX3-32S ethernet28 up    rx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
~ Failed Path(s) ~
None.
Syslog Messages
The Flow Diff Latency and Drop Analysis feature does not create Syslog messages.
Troubleshooting
DMF Controller
Policies dictate how and what packets are directed to the Service Node. Policies must be able to stream packets from two distinct tap points so that the same packet gets delivered to the Service Node for flow-diff and drop analysis.
Possible reasons for flow-diff and drop analysis not working are:
- In push-per-filter mode, the policy bound to the managed service with the flow-diff action is missing filter interfaces or groups that constitute a tap-point-pair.
A policy programmed to use a managed service with the flow-diff action can fail for several reasons:
- The L3 delivery interface or Collector configuration is missing.
- The tap-point-pair configuration is incomplete:
  - The source or destination tap points are missing.
  - A policy-name identifier is used in push-per-filter mode.
- There are more than 512 tap-point-pairs configured or more than 1024 distinct tap points. These limits are global across all flow-diff managed services.
- filter-interface-groups overlap with each other within the same managed service or have more than 128 group members.
- A filter-interface is used individually as a tap point and as part of some filter-interface-group within the same managed service.
Reasons for failure are available in the runtime state of the policy and can be viewed using the show policy policy-name command.
A lack of computed latency reports can mean two things:
- The sample-count-threshold value is not being reached. Either lower the sample-count-threshold value until reports are generated or increase the number of unique packets per flow.
- Flows may not be evicted. The flow-timeout timer resets every time a new packet is received on that flow before the timeout occurs, so a flow that continuously receives packets never times out. Lower the flow-timeout value if the default of 4 seconds causes flows not to expire; take this action if the sample threshold counter is taking a long time due to low traffic volume for that flow. Adjust the sample-count-threshold value before attempting flow-timeout changes. Once a flow expires, the system generates a report for that flow.
For packet-timeout, the value must be larger than the time expected to receive the same packet at every tap point. For A->B, if it takes 20 ms for the packet to appear at B, the packet-timeout must be larger than this time frame to collect timestamps for both tap points and compute a latency in one sample.
If there are many drop reports, confirm that the same packet is received at all relevant taps within the packet-timeout window, which defaults to 200 ms; otherwise, the packet-timeout value may need to be increased. If all the reports are type 2 (unexpected), ensure the taps are in the correct order as configured on the Controller. Also, check that the topology is correct and that there are no unexpected taps. If there are only a few unexpected reports, it is most likely an issue with the switch ordering the timestamps; ignore these or add the reverse direction onto the topology to turn them into type 0 reports.
Limitations
- This feature is only supported in push-per-filter mode. Any configuration in push-per-policy mode will not work as intended.
- A maximum of 512 tap-point-pairs and 1024 distinct tap points are allowed.
distinct tap points are allowed. - Only 40 timestamped instances of a packet are allowed.
- The filter-interface-group used as a tap point must not overlap with any other group within the same managed service and must not have more than 128 members.
- A filter interface cannot be used individually as a tap point and as part of a filter-interface-group simultaneously within the same managed service.
simultaneously within the same managed service. - There is no chaining if a packet flows through three or more tap points in an A->B->C->...->Z topology. The only computed latency reports is for
tap-point-pairs
A->B, A->C, …, and A->Z if these links are specified, but B->C, C->D, etc., will not be computed. - Service node interfaces can have multiple lcores depending on the SKU. An lcore is a logical core which processes traffic on an interface. Hardware RSS firmware in some Service Node SKU currently cannot parse L2 header timestamps, so all packets are sent to the same lcore; however, RSS does distribute packets correctly to multiple lcores when using src-mac timestamping.
- Each packet from the L3 header and onwards gets hashed to a 64-bit value; if two packets hash to the same value, assume the underlying packets are the same.
- Currently, for the flow-diff action on the Service Node, if packets are duplicated so that N copies of the same packet are received:
- N-1 latencies are computed.
- The ingress identifier is the earliest timestamp.
- The system reports timestamps as unsigned 32-bit values, with the maximum timestamp being 2^32-1, corresponding to approximately 4.29 seconds.
- Only min/mean/max latencies are currently reported.
- If there are switch timestamping issues, then these statistics may have high outliers.
- Synchronize the time between switches for this feature to work properly, or latency calculation may be inaccurate.
- Packets are hashed from the L3 header onwards, meaning if there is any corrupted data past the L3 header, it will lead to drop reports. The same packet must appear at two tap points to generate a computed latency report.
- In A->B, if B packets appear before A, an unexpected type report is generated.
- At whichever tap point the packet first appears with the earliest timestamp, it is considered the source.
- While switching the latency configuration, the system may generate a couple of unexpected or drop reports at the beginning.
- A LAG may affect RSS on the service node which may affect performance.
- Users must have good knowledge of network topology when setting up tap points and configuring timeout or sample threshold values. Improper configuration may lead to drop or unexpected reports.
- Occasionally switch timestamping causes the system to generate a few type 2 unexpected reports.
- The system may generate drop reports from certain protocols, such as OSPF.
- The amount of packets per second and packet size influences performance since packets are individually hashed. The amount of tap point pairs and interface groups also affects performance.
Resources
Sharing L3 Delivery Interfaces Across Services
Overview
A DMF Controller allows multiple managed services to share a delivery interface that has an IP address, commonly called an L3 delivery interface. These interfaces redirect the packets processed by managed services to the required tool nodes for further analysis. Sharing an L3 delivery interface is useful when applying different actions to a packet that otherwise cannot be chained together in one managed service while sending it to the same destination.
Currently, only a few managed service actions require L3 delivery interfaces. Since these actions must be the terminating action in a given service action chain, DMF allows different services to use the same L3 delivery interface.
This feature applies to all platforms that support managed services with L3 delivery interfaces and to managed service actions that use L3 deliveries, such as the netflow and udp-replicate actions shown below.
Configuration
Log into the DMF Controller, select a switch, and create an L3 delivery interface.
dmf-controller-1> enable
dmf-controller-1# configure
dmf-controller-1(config)# switch switch1
dmf-controller-1(config-switch)# interface ethernet31
dmf-controller-1(config-switch-if)# role delivery interface-name l3-d1 ip-address 0.0.0.1 nexthop-ip 0.0.0.2 255.255.255.0 nexthop-arp-interval 5
dmf-controller-1(config-switch-if)# rate-limit 1000
Create two or more managed services with the actions described previously as the last entry in the sequence. Use the l3-d1 interface created earlier as the l3-delivery-interface for these services. The configuration commands may differ for each action. For example, use a managed service with the netflow action and another with the udp-replicate action, as shown in the following command sequences:
dmf-controller-1> enable
dmf-controller-1# configure
dmf-controller-1(config)# managed-service ms1
dmf-controller-1(config-managed-srv)# service-interface switch switch1 ethernet1
dmf-controller-1(config-managed-srv)# 1 netflow
dmf-controller-1(config-managed-srv-netflow)# collector 0.0.0.2 udp-port 2055 mtu 1500
dmf-controller-1(config-managed-srv-netflow)# l3-delivery-interface l3-d1
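A corresponding sketch for the second service uses the udp-replicate action. The submode prompt and commands are inferred from the running config shown below; the IP addresses are placeholders:
dmf-controller-1> enable
dmf-controller-1# configure
dmf-controller-1(config)# managed-service ms2
dmf-controller-1(config-managed-srv)# service-interface switch switch1 ethernet2
dmf-controller-1(config-managed-srv)# 1 udp-replicate l3-d1
dmf-controller-1(config-managed-srv-udp-replicate)# in-dst-ip 1.1.1.1
dmf-controller-1(config-managed-srv-udp-replicate)# out-dst-ip 2.2.2.2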
Show Commands
Use the show running-config switch switch1 interface l3-interface command to view the configured L3 interface:
dmf-controller-1> show running-config switch switch1 interface ethernet31
! switch
switch switch1
!
interface ethernet31
force-link-up
rate-limit 1000
role delivery interface-name l3-d1 ip-address 0.0.0.1 nexthop-ip 0.0.0.2 255.255.255.0 nexthop-arp-interval 5
Use the show running-config managed-service command to view the managed service running config:
dmf-controller-1> show running-config managed-service
! managed-service
managed-service ms1
service-interface switch switch1 ethernet1
!
1 netflow
collector 0.0.0.2 udp-port 2055 mtu 1500
l3-delivery-interface l3-d1
managed-service ms2
service-interface switch switch1 ethernet2
!
1 udp-replicate l3-d1
in-dst-ip 1.1.1.1
out-dst-ip 2.2.2.2
Use the show switch switch1 interface l3-interface command to view the runtime status of the interface:
dmf-controller-1> show switch switch1 interface ethernet31
# IF Name     MAC Address                Config State Adv. Features Curr Features Supported Features
-|----------|--------------------------|------|-----|-------------|-------------|------------------|
1 ethernet31 5c:16:c7:14:46:bb (Arista) up     up    10g           10g           10g
Use the show managed-service command to view the runtime status of the managed services:
dmf-controller-1> show managed-service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-service ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name Switch  Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|------------|-------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 ms1          switch1 ethernet2        True      10Gbps              10Gbps             1.02Kbps              99bps
2 ms2          switch1 ethernet1        True      10Gbps              10Gbps             1.02Kbps              99bps
Service Node Management Migration L3ZTN
After the first boot (initial configuration) is complete, in Layer-3 topology mode an administrator can move a Service Node (SN) from an old DANZ Monitoring Fabric (DMF) Controller to a new one via the CLI.
Pre-configure the Service Node on the new Controller before migrating.
To migrate a Service Node's management to a new Controller, follow the steps outlined below: