Advanced Fabric Settings

This chapter describes fabric-wide configuration options required in advanced use cases for deploying DMF policies.

Configuring Advanced Fabric Settings

Overview

Before the DMF 8.4 release, the fabric-wide settings, specifically the Features section, as shown below, were available on the home page after logging in.
Figure 1. DMF Legacy Page (pre 8.4)
Beginning with DMF 8.4, a newly designed Dashboard replaces the former home page, and the Features section becomes the new DMF Features page. To navigate to the DMF Features page, click the gear icon in the navigation bar.
Figure 2. Gear Icon

Page Layout

All fabric-wide configuration settings required in advanced use cases for deploying DMF policies appear in the new DMF Features Page.

Figure 3. DMF Features Page
Each card on the page corresponds to a feature set.
Figure 4. Feature Set Card Example
The UI displays the following:
  • Feature Title
  • A brief description
  • View / Hide detailed information link
  • Current Setting
  • Edit Link - Use the Edit configuration button (pencil icon) to change the value.

The fabric-wide options used with DMF policies include the following:

Table 1. Feature Set
  • Auto VLAN Mode
  • Auto VLAN Range
  • Auto VLAN Strip
  • CRC Check
  • Custom Priority
  • Device Deployment Mode
  • Inport Mask
  • Match Mode
  • Policy Overlap Limit
  • Policy Overlap Limit Strict
  • PTP Timestamping
  • Retain User Policy VLAN
  • Tunneling
  • VLAN Preservation

Managing VLAN Tags in the Monitoring Fabric

Analysis tools often use VLAN tags to identify the filter interface receiving traffic. How VLAN IDs are assigned to traffic depends on which auto-VLAN mode is enabled. The system automatically assigns the VLAN ID from a configurable range of VLAN IDs, from 1 to 4094 by default. Available auto-VLAN modes behave as follows:
  • push-per-policy (default): Automatically adds a unique VLAN ID to all traffic selected by a specific policy. This setting enables tag-based forwarding.
  • push-per-filter: Automatically adds a unique VLAN ID from the default auto-VLAN range (1-4094) to each filter interface. A custom VLAN range can be specified using the auto-vlan-range command. Any VLAN ID outside the auto-VLAN range can be assigned manually to a filter interface.

The VLAN ID assigned to policies or filter interfaces remains unchanged after controller reboot or failover. However, it changes if the policy is removed and added back again. Also, when the VLAN range is changed, existing assignments are discarded, and new assignments are made.

The push-per-filter feature preserves the original VLAN tag unless the packet already has two VLAN tags, in which case the outer VLAN tag is rewritten with the assigned VLAN ID.
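The assignment rules above (assignments persist across reboot and failover, may change when a policy is removed and re-added, and are discarded when the range changes) can be modeled with a small sketch. This is illustrative Python only; the AutoVlanAllocator class and its methods are hypothetical, not a DMF API.

```python
class AutoVlanAllocator:
    """Illustrative model of auto-VLAN assignment from a configurable range."""

    def __init__(self, vlan_min=1, vlan_max=4094):
        self.vlan_min, self.vlan_max = vlan_min, vlan_max
        self.assignments = {}  # policy or filter-interface name -> VLAN ID

    def assign(self, name):
        # Existing assignments are reused, so IDs stay stable across
        # controller reboot or failover.
        if name in self.assignments:
            return self.assignments[name]
        used = set(self.assignments.values())
        for vlan in range(self.vlan_min, self.vlan_max + 1):
            if vlan not in used:
                self.assignments[name] = vlan
                return vlan
        raise RuntimeError("auto-VLAN range exhausted")

    def remove(self, name):
        # A removed policy frees its ID; re-adding it may yield a new ID.
        self.assignments.pop(name, None)

    def set_range(self, vlan_min, vlan_max):
        # Changing the range discards existing assignments entirely.
        self.vlan_min, self.vlan_max = vlan_min, vlan_max
        self.assignments.clear()
```

For example, removing a policy and adding it back can return a different VLAN ID if another policy claimed the freed ID in the meantime.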

The following table summarizes how VLAN tagging occurs with the different auto-VLAN modes:
Table 2. VLAN Tagging Across VLAN Modes
Traffic with VLAN tag type | push-per-policy mode (all supported switches) | push-per-filter mode (all supported switches)
Untagged | Pushes a single tag | Pushes a single tag
Single tag | Pushes an outer (second) tag | Pushes an outer (second) tag
Two tags | Pushes an outer (third) tag, except on T3-based switches, which rewrite the outer tag so the outer customer VLAN is replaced by the DMF policy VLAN | Rewrites the outer tag, so the outer customer VLAN is replaced by the DMF filter VLAN
Note: Enabling push-per-policy automatically enables the auto-delivery-interface-vlan-strip feature if it is disabled. Enabling push-per-filter does not re-enable the global delivery strip option if it was previously disabled.
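The tagging rules in Table 2 can be expressed as a short function. This is an illustrative Python model of the table, not DMF code; the function name and arguments are assumptions.

```python
def apply_auto_vlan(tags, mode, dmf_vlan, t3_filter_switch=False):
    """Model of Table 2: how DMF tags ingress traffic.

    tags: list of existing VLAN IDs, outermost first.
    mode: 'push-per-policy' or 'push-per-filter'.
    """
    tags = list(tags)
    if len(tags) < 2:
        # Untagged or singly tagged: both modes push a new outer tag.
        return [dmf_vlan] + tags
    # Doubly tagged (Q-in-Q) traffic:
    if mode == 'push-per-policy' and not t3_filter_switch:
        # Push an outer (third) tag; both original tags are preserved.
        return [dmf_vlan] + tags
    # T3-based switches in push-per-policy, and all switches in
    # push-per-filter, rewrite the outer customer tag instead.
    return [dmf_vlan] + tags[1:]
```

For instance, a Q-in-Q packet with tags [10, 20] keeps both customer tags under push-per-policy on a non-T3 switch, but loses its outer customer tag (10) under push-per-filter.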
The following table summarizes how different auto-VLAN modes affect the applications and services supported.
Note: Matching on untagged packets cannot be applied to DMF policies when in push-per-policy mode.
Table 3. Auto-VLAN Mode Comparison
Auto-VLAN Mode | Supported Platform | TCAM Optimization in the Core | L2 GRE Tunnels Support | Q-in-Q Packets Preserve Both Original Tags | Supported DMF Service Node Services | Manual Tag to Filter Interface
push-per-policy (default) | All | Yes | Yes | Yes | All | Policy tag overwrites manual tag
push-per-filter | All | No | Yes | No | All | Configuration not allowed
Note: Tunneling is supported with full-match or offset-match modes but not with l3-l4-match mode.

Tag-based forwarding, which improves traffic forwarding and reduces TCAM utilization on the monitoring fabric switches, is enabled only when choosing the push-per-policy option.
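As a rough illustration of why tag-based forwarding reduces TCAM usage: with push-per-policy, a core switch can forward on the single policy VLAN instead of installing one rule per match clause. The sketch below is a conceptual Python model only, not a representation of actual switch rule programming; the rule tuples are hypothetical.

```python
def core_rules(policies, mode):
    """Roughly count the forwarding rules a core switch needs.

    policies: list of dicts with a 'vlan' (assigned policy VLAN) and
    'matches' (the policy's traffic match clauses).
    """
    if mode == 'push-per-policy':
        # Tag-based forwarding: one rule per policy, matching its VLAN.
        return [('vlan', p['vlan']) for p in policies]
    # Otherwise the core must carry one rule per match clause per policy.
    return [('match', m) for p in policies for m in p['matches']]
```

With two policies of two and three match clauses, tag-based forwarding needs 2 core rules instead of 5; the gap grows with policy complexity.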

When the mode is push-per-filter, the VLAN being pushed or rewritten can be displayed using the show interface-names command, as shown below:
controller-1> show interface-names
~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF          Switch      IF Name     Dir State Speed  VLAN Tag Analytics Ip address Connected Device
--|---------------|-----------|-----------|---|-----|------|--------|---------|----------|----------------|
1  TAP-PORT-eth1   FILTER-SW1  ethernet1   rx  up    10Gbps 5        True
2  TAP-PORT-eth10  FILTER-SW1  ethernet10  rx  up    10Gbps 10       True
3  TAP-PORT-eth12  FILTER-SW1  ethernet12  rx  up    10Gbps 11       True
4  TAP-PORT-eth14  FILTER-SW1  ethernet14  rx  up    10Gbps 12       True
5  TAP-PORT-eth16  FILTER-SW1  ethernet16  rx  up    10Gbps 13       True
6  TAP-PORT-eth18  FILTER-SW1  ethernet18  rx  up    10Gbps 14       True
7  TAP-PORT-eth20  FILTER-SW1  ethernet20  rx  up    10Gbps 16       True
8  TAP-PORT-eth22  FILTER-SW1  ethernet22  rx  up    10Gbps 17       True

Auto VLAN Mode

Analysis tools often use VLAN tags to identify the filter interface receiving traffic. How VLAN IDs are assigned to traffic depends on which auto-VLAN mode is enabled. The system automatically assigns the VLAN ID from a configurable range of VLAN IDs from 1 to 4094 by default. Available auto-VLAN modes behave as follows:

  • Push per Policy (default): Automatically adds a unique VLAN ID to all traffic selected by a specific policy. This setting enables tag-based forwarding.
  • Push per Filter: Automatically adds a unique VLAN ID from the default auto-VLAN range (1-4094) to each filter interface. A custom VLAN range can be specified using the auto-vlan-range command. Any VLAN ID outside the auto-VLAN range can be assigned manually to a filter interface.

The following table summarizes how VLAN tagging occurs with the different Auto VLAN modes.

Traffic with VLAN tag type | push-per-policy mode (all supported switches) | push-per-filter mode (all supported switches)
Untagged | Pushes a single tag | Pushes a single tag
Single tag | Pushes an outer (second) tag | Pushes an outer (second) tag
Two tags | Pushes an outer (third) tag, except on T3-based switches, which rewrite the outer tag so the outer customer VLAN is replaced by the DMF policy VLAN | Rewrites the outer tag, so the outer customer VLAN is replaced by the DMF filter VLAN
Note: Enabling push-per-policy automatically enables the auto-delivery-interface-vlan-strip feature if it is disabled. Enabling push-per-filter does not re-enable the global delivery strip option if it was previously disabled.

The following table summarizes how different Auto VLAN modes affect supported applications and services.

Note: Matching on untagged packets cannot be applied to DMF policies when in push-per-policy mode.
Auto-VLAN Mode | Supported Platform | TCAM Optimization in the Core | L2 GRE Tunnels Support | Q-in-Q Packets Preserve Both Original Tags | Supported DMF Service Node Services | Manual Tag to Filter Interface
Push per Policy (default) | All | Yes | Yes | Yes | All | Policy tag overwrites manual tag
Push per Filter | All | No | Yes | No | All | Configuration not allowed

Tag-based forwarding, which improves traffic forwarding and reduces TCAM utilization on the monitoring fabric switches, is only enabled when choosing the push-per-policy option.

Use the CLI or the GUI to configure Auto VLAN Mode as described in the following topics.

Configuring Auto VLAN Mode using the CLI

To set the auto VLAN mode, perform the following steps:

  1. When setting the auto VLAN mode to push-per-filter, define the range of automatically assigned VLAN IDs by entering the following command from config mode:
    auto-vlan-range vlan-min <start> vlan-max <end>
    Replace start and end with the first and last VLAN ID in the range. For example, the following command assigns a range of 100 VLAN IDs from 3994 to 4094:
    controller-1(config)# auto-vlan-range vlan-min 3994 vlan-max 4094
  2. Select the VLAN mode using the following command from config mode:
    auto-vlan-mode { push-per-filter | push-per-policy }

    Find details of the impact of these options in the Managing VLAN Tags in the Monitoring Fabric section.

    For example, the following command adds a unique outer VLAN tag to each packet received on each filter interface:
    controller-1(config)# auto-vlan-mode push-per-filter
    Switching to auto vlan mode would cause policies to be re-installed. Enter "yes" (or "y")
    to continue: y
  3. To display the configured VLAN mode, enter the show fabric command, as in the following example:
    controller-1# show fabric
    ~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~
    Number of switches: 5
    Inport masking: True
    Start time: 2018-11-02 23:42:29.183000 UTC
    Number of unmanaged services: 0
    Filter efficiency : 1:1
    Number of switches with service interfaces: 0
    Total delivery traffic (bps): 411Kbps
    Number of managed service instances : 0
    Number of service interfaces: 0
    Match mode: full-match
    Number of delivery interfaces : 13
    Max pre-service BW (bps): -
    Auto VLAN mode: push-per-filter
    Number of switches with delivery interfaces : 4
    Number of managed devices : 1
    Uptime: 2 days, 19 hours
    Total ingress traffic (bps) : 550Kbps
    Max overlap policies (0=disable): 10
    Auto Delivery Interface Strip VLAN: False
    Number of core interfaces : 219
    Max filter BW (bps) : 184Gbps
    Number of switches with filter interfaces : 5
    State : Enabled
    Max delivery BW (bps) : 53Gbps
    Total pre-service traffic (bps) : -
    Track hosts : True
    Number of filter interfaces : 23
    Number of active policies : 3
    Number of policies: 25
    ------------------------output truncated------------------------
  4. To display the VLAN IDs assigned to each policy, enter the show policy command, as in the following example:
    controller-1> show policy
    # Policy Name                Action  Runtime Status Type       Priority Overlap Priority Push VLAN Filter BW Delivery BW Post Match Filter Traffic Delivery Traffic Services
    -|--------------------------|-------|--------------|----------|--------|----------------|---------|---------|-----------|-------------------------|----------------|----------------------------|
    1 GENERATE-NETFLOW-RECORDS   forward installed      Configured 100      0                4         100Gbps   10Gbps      -                         -                DMF-OOB-NETFLOWSERVICE
    2 P1                         forward inactive       Configured 100      0                1         -         1Gbps       -                         -
    3 P2                         forward inactive       Configured 100      0                3         -         10Gbps      -                         -
    4 TAP-WINDOWS10-NETWORK      forward inactive       Configured 100      0                2         21Gbps    1Gbps       -                         -
    5 TIMESTAMP-INCOMING-PACKETS forward inactive       Configured 100      0                5         -         100Gbps     -                         -                DMF-OOB-TIMESTAMPINGSERVICE
    controller-1>
Note: The strip VLAN option, when enabled, removes the outer VLAN tag, including the VLAN ID applied by any rewrite VLAN option.

Configuring Auto VLAN Mode using the GUI


  1. To edit this feature, locate the corresponding card and click the Edit (pencil) icon.
    Figure 5. Auto VLAN Mode Config
  2. A configuration edit dialog appears, displaying the corresponding prompt message.
    Figure 6. Edit VLAN Mode
  3. To configure different modes, click the drop-down arrow to open the menu.
    Figure 7. Drop-down Example
  4. From the drop-down menu, select and click on the desired mode.
    Figure 8. Push Per Policy
  5. Alternatively, enter the desired mode name in the input area.
    Figure 9. Push Per Policy
  6. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
    Figure 10. Submit Button
  7. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 11. Current Configuration Status

The following feature sets work in the same manner as the Auto VLAN Mode feature described above.

  • Device Deployment Mode
  • Match Mode

Auto VLAN Range


The range of automatically generated VLANs only applies when setting Auto VLAN Mode to push-per-filter. VLANs are picked from the range 1 - 4094 when not specified.

  1. To edit this feature, locate the corresponding card and click the Edit (pencil) icon.
    Figure 12. Edit Auto VLAN Range
  2. A configuration edit dialogue window pops up, displaying the corresponding prompt message. The Auto VLAN Range defaults to 1 - 4094.
    Figure 13. Edit Auto VLAN Range
  3. Click on the Custom button to configure the custom range.
    Figure 14. Custom Button
  4. Adjust range value (minimum value: 1, maximum value: 4094). There are three ways to adjust the value of a range:
    • Directly enter the desired value in the input area, with the left side representing the minimum value of the range and the right side representing the maximum value.
    • Adjust the value by dragging the slider using a mouse. The left knob represents the minimum value of the range, while the right knob represents the maximum value.
    • Use the up and down arrow buttons in the input area to adjust the value accordingly. Pressing the up arrow increments the value by 1, while pressing the down arrow decrements it by 1.
  5. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  6. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 15. Configuration Change Success

Configuring Auto VLAN Range using the CLI

To set the Auto VLAN Range, use the following command:

auto-vlan-range vlan-min <start> vlan-max <end>

Replace <start> and <end> with the first and last VLAN ID in the desired range.

For example, the following command assigns a range of 100 VLAN IDs from 3994 to 4094:

controller-1(config)# auto-vlan-range vlan-min 3994 vlan-max 4094
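The CLI enforces the VLAN ID limits described above. A minimal sketch of the equivalent validation follows (illustrative Python; the function is hypothetical, not part of DMF):

```python
def validate_auto_vlan_range(vlan_min, vlan_max):
    """Sanity checks mirroring the CLI limits: IDs in 1-4094, min <= max."""
    if not (1 <= vlan_min <= 4094 and 1 <= vlan_max <= 4094):
        raise ValueError("VLAN IDs must be between 1 and 4094")
    if vlan_min > vlan_max:
        raise ValueError("vlan-min must not exceed vlan-max")
    # Number of VLAN IDs available for auto-assignment (inclusive range).
    return vlan_max - vlan_min + 1
```

Keep the range large enough for the expected number of policies or filter interfaces, since the fabric assigns one ID to each.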

Auto VLAN Strip

The strip VLAN option removes the outer VLAN tag before forwarding the packet to a delivery interface. Only the outer tag is removed if the packet has two VLAN tags; if the packet has no VLAN tag, it is not modified. Users can remove the VLAN ID on traffic forwarded to a specific delivery interface or globally for all delivery interfaces. The strip VLAN option removes any VLAN ID applied by the rewrite VLAN option.

The following two methods are available:

  • Remove VLAN IDs fabric-wide for all delivery interfaces. This method removes only the VLAN tag added by DMF Fabric.
  • On specific delivery interfaces. This method has four options:
    • Keep all tags intact: preserves the VLAN tag added by the DMF fabric and any other tags in the traffic, using the strip-no-vlan option during delivery interface configuration.
    • Remove only the outer VLAN tag added by the DANZ Monitoring Fabric, using the strip-one-vlan option during delivery interface configuration.
    • Remove only the second (inner) tag: preserves the outer VLAN tag added by the DMF fabric and removes the second (inner) tag in the traffic, using the strip-second-vlan option during delivery interface configuration.
    • Remove two tags: removes the outer VLAN tag added by the DMF fabric and the inner VLAN tag in the traffic, using the strip-two-vlan option during delivery interface configuration.
Note: The strip vlan command for a specific delivery interface overrides the fabric-wide strip vlan option.
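The four per-interface options can be modeled as operations on a tag stack (outermost tag first). This is illustrative Python, not DMF code; it assumes the outermost tag is the one DMF added.

```python
def strip_on_delivery(tags, option):
    """Model of the per-delivery-interface strip options.

    tags: VLAN IDs, outermost first; tags[0] is assumed to be the
    VLAN that DMF pushed or rewrote on ingress.
    """
    if option == 'strip-no-vlan':
        return list(tags)                        # keep every tag intact
    if option == 'strip-one-vlan':
        return list(tags[1:])                    # drop the outer (DMF) tag
    if option == 'strip-second-vlan':
        return list(tags[:1]) + list(tags[2:])   # keep outer, drop inner
    if option == 'strip-two-vlan':
        return list(tags[2:])                    # drop outer and inner tags
    raise ValueError("unknown strip option: %s" % option)
```

For a packet carrying [DMF tag, outer customer tag, inner customer tag], the four options yield three, two, two, and one remaining tags respectively.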

By default, the VLAN ID added by DMF as a result of enabling any of the following options is stripped:

  • Push per Policy
  • Push per Filter
  • Rewrite VLAN under filter-interfaces

Tagging and stripping VLANs as they ingress and egress DMF differs depending on whether the switch is Trident 3-based.

Use the CLI or the GUI to configure Auto VLAN Strip as described in the following topics.

Auto VLAN Strip using the CLI

The strip VLAN option removes the outer VLAN tag before forwarding the packet to a delivery interface. Only the outer tag is removed if the packet has two VLAN tags. If it has no VLAN tag, the packet is not modified. Users can remove the VLAN ID on traffic forwarded to a specific delivery interface or globally for all delivery interfaces. The strip VLAN option removes any VLAN ID applied by the rewrite VLAN option.

The following are the two methods available:

  • Remove VLAN IDs fabric-wide for all delivery interfaces: This method removes only the VLAN tag added by the DMF Fabric.
  • Remove VLAN IDs only on specific delivery interfaces: This method has four options:
    • Keep all tags intact: preserves the VLAN tag added by the DMF fabric and any other tags in the traffic, using the strip-no-vlan option during delivery interface configuration.
    • Remove only the outer VLAN tag added by the DANZ Monitoring Fabric, using the strip-one-vlan option during delivery interface configuration.
    • Remove only the second (inner) tag: preserves the outer VLAN tag added by DMF and removes the second (inner) tag in the traffic, using the strip-second-vlan option during delivery interface configuration.
    • Remove two tags: removes the outer VLAN tag added by the DMF fabric and the inner VLAN tag in the traffic, using the strip-two-vlan option during delivery interface configuration.
Note: The strip vlan command for a specific delivery interface overrides the fabric-wide strip VLAN option.
By default, the VLAN ID is stripped when DMF adds it as a result of enabling the following options:
  • push-per-policy
  • push-per-filter
  • rewrite vlan under filter-interfaces

To view the current auto-delivery-interface-vlan-strip configuration, enter the following command:

controller-1> show running-config feature details
! deployment-mode
deployment-mode pre-configured
! auto-delivery-interface-vlan-strip
auto-delivery-interface-vlan-strip
! auto-vlan-mode
auto-vlan-mode push-per-policy
! auto-vlan-range
auto-vlan-range vlan-min 3200 vlan-max 4094
! crc
crc
! match-mode
match-mode full-match
! tunneling
tunneling
! allow-custom-priority
allow-custom-priority
! inport-mask
no inport-mask
! overlap-limit-strict
no overlap-limit-strict
! overlap-policy-limit
overlap-policy-limit 10
! packet-capture
packet-capture retention-days 7

To view the current auto-delivery-interface-vlan-strip state, enter the following command:

controller-1> show fabric
~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~
Number of switches : 5
Inport masking : True
Start time : 2018-10-16 22:30:03.345000 UTC
Number of unmanaged services : 0
Filter efficiency : 3005:1
Number of switches with service interfaces : 0
Total delivery traffic (bps) : 232bps
Number of managed service instances : 0
Number of service interfaces : 0
Match mode : l3-l4-match
Number of delivery interfaces : 24
Max pre-service BW (bps) : -
Auto VLAN mode : push-per-policy
Number of switches with delivery interfaces : 5
Number of managed devices : 1
Uptime : 21 hours, 53 minutes
Total ingress traffic (bps) : 697Kbps
Max overlap policies (0=disable) : 10
Auto Delivery Interface Strip VLAN : True
To disable this global command, enter the following command:
controller-1(config-switch-if)# no auto-delivery-interface-vlan-strip

The delivery-interface-level strip VLAN command overrides the global auto-delivery-interface-vlan-strip command. For example, when global VLAN stripping is disabled, or to override the default strip option on a delivery interface, use the following options:

To strip the VLAN added by DMF fabric on a specific delivery interface, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-one-vlan
When global VLAN stripping is enabled, it strips only the outer VLAN ID. To remove the outer VLAN ID added by DMF as well as the inner VLAN ID, enter the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-two-vlan
To strip only the inner VLAN ID and preserve the outer VLAN ID that DMF added, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-second-vlan
To preserve the VLAN tag added by DMF and other tags in the traffic, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-no-vlan
Note: For all modes, VLAN ID stripping is supported at both the global and delivery-interface levels. The rewrite-per-policy and rewrite-per-filter options were removed in DMF release 6.0 because the push-per-policy and push-per-filter options now support the related use cases.
The syntax for the strip VLAN ID feature is as follows:
controller-1(config-switch-if)# role delivery interface-name <name> [strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan]

Use the option to leave all VLAN tags intact, remove the outermost tag, remove the second (inner) tag, or remove the outermost two tags, as required.

By default, VLAN stripping is enabled and the outer VLAN added by DMF is removed.

To preserve the outer VLAN tag, enter the strip-no-vlan command, as in the following example, which preserves the outer VLAN ID for traffic forwarded to the delivery interface TOOL-PORT-1:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-no-vlan
When global VLAN stripping is disabled, the following commands remove the outer VLAN tag, added by DMF, on packets transmitted to the specific delivery interface ethernet20 on DMF-DELIVERY-SWITCH-1:
controller-1(config)# switch DMF-DELIVERY-SWITCH-1
controller-1(config-switch)# interface ethernet20
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-one-vlan
To restore the default configuration, which is to strip the VLAN IDs from traffic to every delivery interface, enter the following command:
controller-1(config)# auto-delivery-interface-vlan-strip
This would enable auto delivery interface strip VLAN feature. 
Existing policies will be re-computed. Enter “yes” (or “y”) to continue: yes

As mentioned earlier, tagging and stripping VLANs as they ingress and egress DMF differs based on whether the switch uses a Trident 3 chipset. The following scenarios show how DMF behaves in different VLAN modes with various knobs set.

Scenario 1

  • VLAN mode: Push per Policy
  • Filter interface on any switch except a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN stripping is enabled
Table 4. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type | No Configuration | strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan
(Effect) | DMF policy VLAN is stripped automatically on the delivery interface by the default global strip of the VLAN added by DMF | DMF policy VLAN and customer VLANs are preserved | Strips the outermost VLAN, which is the DMF policy VLAN | DMF policy VLAN is preserved and the outermost customer VLAN is removed | Strips two VLANs: the DMF policy VLAN and the outer customer VLAN
Untagged | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Singly Tagged | Packets exit DMF singly tagged with the customer VLAN. | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF policy VLAN. | Packets exit DMF singly tagged with the customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Doubly Tagged | Packets exit DMF doubly tagged; both VLANs are customer VLANs. | Packets exit DMF triple-tagged; the outermost VLAN in the packet is the DMF policy VLAN. | Packets exit DMF doubly tagged; both VLANs are customer VLANs. | Packets exit DMF doubly tagged; the outer VLAN is the DMF policy VLAN and the inner VLAN is the inner customer VLAN from the original packet. | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN.
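Under the assumptions of Scenario 1, the table rows can be reproduced by composing the ingress push with the egress strip option. The sketch below is illustrative Python only (the function name is hypothetical); tags are listed outermost first.

```python
def scenario1_egress(customer_tags, strip='default', policy_vlan=100):
    """Scenario 1 model: push-per-policy on a non-T3 filter switch,
    global VLAN strip enabled. customer_tags: VLAN IDs, outermost first."""
    # Ingress: push-per-policy adds the policy VLAN as a new outer tag
    # (non-T3 filter switch, so doubly tagged traffic gets a third tag).
    tags = [policy_vlan] + list(customer_tags)
    # Egress: apply the delivery-interface strip option.
    if strip in ('default', 'strip-one-vlan'):
        return tags[1:]                  # global strip removes the DMF tag
    if strip == 'strip-no-vlan':
        return tags                      # all tags preserved
    if strip == 'strip-second-vlan':
        return tags[:1] + tags[2:]       # keep DMF tag, drop inner tag
    if strip == 'strip-two-vlan':
        return tags[2:]                  # drop DMF tag and outer customer tag
    raise ValueError(strip)
```

Running the function over untagged, singly tagged, and doubly tagged inputs reproduces each cell of the table above.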

Scenario 2

  • VLAN Mode: Push per Policy
  • Filter interface on any switch except a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN strip is disabled
Table 5. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type | No Configuration | strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan
(Effect) | DMF policy VLAN and customer VLANs are preserved | DMF policy VLAN and customer VLANs are preserved | Strips only the outermost VLAN, which is the DMF policy VLAN | DMF policy VLAN is preserved and the outermost customer VLAN is removed | Strips two VLANs: the DMF policy VLAN and the outer customer VLAN
Untagged | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Singly Tagged | Packets exit DMF doubly tagged; the outer VLAN is the DMF policy VLAN and the inner VLAN is the customer outer VLAN. | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF policy VLAN. | Packets exit DMF singly tagged with the customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Doubly Tagged | Packets exit DMF triple-tagged; the outermost VLAN in the packet is the DMF policy VLAN. | Packets exit DMF triple-tagged; the outermost VLAN in the packet is the DMF policy VLAN. | Packets exit DMF doubly tagged; both VLANs are customer VLANs. | Packets exit DMF doubly tagged; the outer VLAN is the DMF policy VLAN and the inner VLAN is the inner customer VLAN from the original packet. | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN.

Scenario 3

  • VLAN Mode - Push per Policy
  • Filter interface on a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN strip is enabled
Table 6. Behavior of traffic as it egresses with different strip options on a delivery interface
VLAN tag type | No Configuration | strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan
(Effect) | DMF policy VLAN is stripped automatically on the delivery interface by the default global strip of the VLAN added by DMF | DMF policy VLAN and customer VLANs are preserved | Strips the outermost VLAN, which is the DMF policy VLAN | DMF policy VLAN is preserved and the outermost customer VLAN is removed | Strips two VLANs: the DMF policy VLAN and the outer customer VLAN
Untagged | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Singly Tagged | Packets exit DMF singly tagged with the customer VLAN. | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF policy VLAN. | Packets exit DMF singly tagged with the customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.
Doubly Tagged | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN. | Packets exit DMF doubly tagged; the outer customer VLAN is replaced by the DMF policy VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF policy VLAN. | Packets exit DMF as untagged traffic.

Scenario 4

  • VLAN Mode - Push per Filter
  • Filter interface on any switch
  • Delivery interface on any switch
  • Global VLAN strip is enabled
Table 7. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type | No Configuration | strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan
(Effect) | DMF filter VLAN is stripped automatically on the delivery interface by the global strip of the VLAN added by DMF | DMF filter VLAN and customer VLANs are preserved | Strips the outermost VLAN, which is the DMF filter VLAN | DMF filter VLAN is preserved and the outermost customer VLAN is removed | Strips two VLANs: the DMF filter interface VLAN and the outer customer VLAN
Untagged | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.
Singly Tagged | Packets exit DMF singly tagged; the VLAN in the packet is the customer VLAN. | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.
Doubly Tagged | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN. | Packets exit DMF doubly tagged; the outer customer VLAN is replaced by the DMF filter interface VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.

Scenario 5

  • VLAN Mode - Push per Filter
  • Filter interface on any switch
  • Delivery interface on any switch
  • Global VLAN strip is disabled
Table 8. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type | No Configuration | strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan
(Effect) | DMF filter VLAN and customer VLANs are preserved | DMF filter VLAN and customer VLANs are preserved | Strips the outermost VLAN, which is the DMF filter VLAN | DMF filter VLAN is preserved and the outermost customer VLAN is removed | Strips two VLANs: the DMF filter interface VLAN and the outer customer VLAN
Untagged | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.
Singly Tagged | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF filter VLAN and the inner VLAN is the customer VLAN. | Packets exit DMF doubly tagged; the outer VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.
Doubly Tagged | Packets exit DMF doubly tagged; the outer customer VLAN is replaced by the DMF filter interface VLAN. | Packets exit DMF doubly tagged; the outer customer VLAN is replaced by the DMF filter interface VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the inner customer VLAN. | Packets exit DMF singly tagged; the VLAN in the packet is the DMF filter interface VLAN. | Packets exit DMF as untagged traffic.

Auto VLAN Strip using the GUI


  1. A toggle button controls the configuration of this feature. Locate the corresponding card and click the toggle switch.
    Figure 16. Toggle Switch
  2. A confirm window pops up, displaying the corresponding prompt message. Click the Enable button to confirm the configuration changes or the Cancel button to cancel the configuration. Conversely, to disable the configuration, click Disable.
    Figure 17. Confirm / Enable
  3. Review any warning messages that appear in the confirmation window during the configuration process.
    Figure 18. Warning Message - Changing

The following feature sets work in the same manner as the Auto VLAN Strip feature described above.

  • CRC Check
  • Custom Priority
  • Inport Mask
  • Policy Overlap Limit Strict
  • Retain User Policy VLAN
  • Tunneling

CRC Check

If the Switch CRC option is enabled, which is the default, each DMF switch drops incoming packets that enter the fabric with a CRC error. The switch generates a new CRC if the incoming packet was modified by an option that invalidates the original CRC checksum, which includes the push VLAN, rewrite VLAN, strip VLAN, and L2 GRE tunnel options.

Note: Enable the Switch CRC option to use the DMF tunneling feature.

If the Switch CRC option is disabled, DMF switches do not check the CRC of incoming packets and do not drop packets with CRC errors, and they do not generate a new CRC if a packet is modified. This mode is helpful when packets with CRC errors need to be delivered to a destination tool unmodified for analysis. When disabling the Switch CRC option, ensure the destination tool does not drop packets with CRC errors. Also, recognize that DMF options that modify packets will introduce CRC errors, and take care not to mistake these for CRC errors originating from the traffic source.
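As an illustration of why CRC regeneration is needed, the sketch below (illustrative Python, not DMF code; the simplified frame layout and helper names are assumptions) shows that inserting an 802.1Q tag invalidates a CRC computed over the original frame bytes:

```python
# Illustrative sketch, not DMF code: a switch that modifies packets must
# regenerate the CRC, or the frame arrives looking like a CRC error.
import zlib

def fcs(frame: bytes) -> int:
    """CRC-32 standing in for the Ethernet frame check sequence."""
    return zlib.crc32(frame) & 0xFFFFFFFF

def push_vlan(frame: bytes, vlan_id: int) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100) after the two MAC addresses."""
    tag = b"\x81\x00" + vlan_id.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]

original = bytes(range(64))         # stand-in 64-byte frame
tagged = push_vlan(original, 100)   # e.g., a DMF push-VLAN operation

# The original FCS no longer matches the modified frame, so the switch
# must compute a fresh CRC after the modification.
assert fcs(tagged) != fcs(original)
```

The same reasoning applies to the rewrite VLAN, strip VLAN, and L2 GRE tunnel options: any of them changes the frame bytes, so the CRC must be recomputed on egress.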

Note: When the Switch CRC option is disabled, packets going to the Service Node or Recorder Node are dropped because a new CRC is not calculated when push-per-policy or push-per-filter adds a VLAN tag.

Enable and disable CRC Check using the steps described in the following topics.

CRC Check using the CLI


To disable the Switch CRC option, enter the following command from config mode:
controller-1(config)# no crc
Disabling CRC mode may cause problems to tunnel interface. Enter “yes” (or “y”) to continue: y
In the event the Switch CRC option is disabled, re-enable the Switch CRC option using the following command from config mode:
controller-1(config)# crc
Enabling CRC mode would cause packets with crc error dropped. Enter "yes" (or "y") to continue: y
Tip: To enable or disable the CRC check through the GUI, refer to the section CRC Check using the GUI.

CRC Check using the GUI

From the DMF Features page, proceed to the CRC Check feature card and perform the following steps to enable the feature.
  1. Select the CRC Check card.
    Figure 19. CRC Check Disabled
  2. Toggle the CRC Check to On.
  3. Confirm the activation by clicking Enable, or click Cancel to return to the DMF Features page.
    Figure 20. Enable CRC Check
  4. CRC Check is running.
    Figure 21. CRC Check Enabled
  5. To disable the feature, toggle the CRC Check to Off. Click Disable and confirm.
    Figure 22. Disable CRC Check
    The feature card updates with the status.
    Figure 23. CRC Check Disabled

Custom Priority

When custom priorities are allowed, non-admin users may assign policy priorities between 0 and 100. When custom priorities are not allowed, non-admin users' policies are automatically assigned the default priority of 100.

Enable and disable Custom Priority using the steps described in the following topics.

Configuring Custom Priority using the GUI

From the DMF Features page, proceed to the Custom Priority feature card and perform the following steps to enable the feature.
  1. Select the Custom Priority card.
    Figure 24. Custom Priority Disabled
  2. Toggle the Custom Priority to On.
  3. Confirm the activation by clicking Enable, or click Cancel to return to the DMF Features page.
    Figure 25. Enable Custom Priority
  4. Custom Priority is running.
    Figure 26. Custom Priority Enabled
  5. To disable the feature, toggle the Custom Priority to Off. Click Disable and confirm.
    Figure 27. Disable Custom Priority
    The feature card updates with the status.
    Figure 28. Custom Priority Disabled

Configuring Custom Priority using the CLI

To enable Custom Priority, enter the following command:

controller-1(config)# allow-custom-priority

To disable Custom Priority, enter the following command:

controller-1(config)# no allow-custom-priority

Device Deployment Mode

Complete the fabric switch installation in one of the following two modes:

  • Layer 2 Zero Touch Fabric (L2ZTF, Auto-discovery switch provisioning mode)

    In this mode, which is the default, Switch ONIE software automatically discovers the Controller via IPv6 link-local addresses and downloads and installs the appropriate Switch Light OS image from the Controller. This installation method requires all the fabric switches and the DMF Controller to be in the same Layer 2 network (IP subnet). If the fabric switches need IPv4 addresses to communicate with SNMP or other external services, users must configure IPAM, which provides the Controller with a range of IPv4 addresses to allocate to the fabric switches.

  • Layer 3 Zero Touch Fabric (L3ZTF, Preconfigured switch provisioning mode)

    When fabric switches are in a different Layer 2 network from the Controller, log in to each switch individually to configure network information and download the ZTF installer. Subsequently, the switch automatically downloads Switch Light OS from the Controller. This mode requires communication between the Controller and the fabric switches using IPv4 addresses, and no IPAM configuration is required.

The following table summarizes the requirements for installation using each mode:
  • Any switch in a different subnet from the Controller: Layer 2 mode - No; Layer 3 mode - Yes
  • IPAM configuration for SNMP and other IPv4 services: Layer 2 mode - Yes; Layer 3 mode - No
  • IP address assignment: Layer 2 mode - IPv4 or IPv6; Layer 3 mode - IPv4
  • Refer to this section (in the User Guide): Layer 2 mode - Using L2 ZTF (Auto-Discovery) Provisioning Mode; Layer 3 mode - Changing to Layer 3 (Pre-Configured) Switch Provisioning Mode

All the fabric switches in a single fabric must be installed using the same mode. If any fabric switches are in a different IP subnet than the Controller, use Layer 3 mode to install all the switches, even those in the same Layer 2 network as the Controller. Installing switches in mixed mode, where some switches use ZTF in the same Layer 2 network as the Controller while other switches in a different subnet are installed manually or using DHCP, is unsupported.

Configuring Device Deployment Mode using the GUI

From the DMF Features page, proceed to the Device Deployment Mode feature card and perform the following steps to manage the feature.

  1. Select the Device Deployment Mode card.
    Figure 29. Device Deployment Mode - Auto Discovery
  2. Enter the edit mode using the pencil icon.
    Figure 30. Configure Device Deployment Mode
  3. Change the switching mode as required using the drop-down menu. The default mode is Auto Discovery.
    Figure 31. Device Deployment Mode Options
    Figure 32. Device Deployment Mode - Pre-Configured Option
  4. Click Submit and confirm the operation when prompted.
  5. The Device Deployment Mode status updates.
    Figure 33. Device Deployment Mode - Status Update

Configuring Device Deployment Mode using the CLI

Device Deployment Mode has two options, auto-discovery and pre-configured. Select the desired option as shown below:
controller-1(config)# deployment-mode auto-discovery

Changing device deployment mode requires modifying switch configuration. Enter "yes" (or "y") to continue: y
controller-1(config)# deployment-mode pre-configured

Changing device deployment mode requires modifying switch configuration. Enter "yes" (or "y") to continue: y

Inport Mask

Enable and disable Inport Mask using the steps described in the following topics.

InPort Mask using the CLI

DANZ Monitoring Fabric implements multiple flow optimizations to reduce the number of flows programmed in the DMF switch TCAM space. This feature enables effective usage of TCAM space, and it is on by default.

When this feature is off, TCAM rules are applied for each ingress port belonging to the same policy. For example, in the following topology, if a policy is configured with 10 match rules and filter interfaces F1 and F2, then 20 TCAM rows (10 for F1 and 10 for F2) are consumed.
Figure 34. Simple Inport Mask Optimization

With inport mask optimization, only 10 rules are consumed. This feature optimizes TCAM usage at every level (filter, core, delivery) in the DMF network.

Consider the more complex topology illustrated below:
Figure 35. Complex Inport Mask Optimization

In this topology, if a policy has N rules without in-port optimization, the policy will consume 3N at Switch 1, 3N at Switch 2, and 2N at Switch 3. With the in-port optimization feature enabled, the policy consumes only N rules at each switch.
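The rule-count arithmetic above can be sketched as follows (illustrative Python, not DMF code; the per-switch ingress port counts are assumptions mirroring the 3N/3N/2N example):

```python
# Illustrative sketch, not DMF code: TCAM consumption for one policy
# with N match rules, with and without inport mask optimization.
def tcam_rows(n_rules: int, ingress_ports: int, inport_mask: bool) -> int:
    # Without inport masking, rules are replicated per ingress port on a
    # switch; with masking, all ingress ports share a single set of rules.
    return n_rules if inport_mask else n_rules * ingress_ports

N = 10
ports = {"switch1": 3, "switch2": 3, "switch3": 2}  # assumed from the figure

without_mask = {sw: tcam_rows(N, p, inport_mask=False) for sw, p in ports.items()}
with_mask = {sw: tcam_rows(N, p, inport_mask=True) for sw, p in ports.items()}

assert without_mask == {"switch1": 30, "switch2": 30, "switch3": 20}  # 3N, 3N, 2N
assert with_mask == {"switch1": 10, "switch2": 10, "switch3": 10}     # N per switch
```

The savings grow with the number of filter interfaces a policy spans, at the cost of the per-filter-port statistics described next.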

However, this feature loses granularity in the statistics available because there is only one set of flow mods for multiple filter ports per switch. Statistics without this feature are maintained per filter port per policy.

With inport optimization enabled, the statistics are combined for all input ports sharing rules on that switch. To obtain per-filter-port statistics, with separate flow mods for each filter port, disable inport optimization (it is enabled by default).

To disable the inport optimization feature, enter the following command from config mode:
controller-1(config)# no inport-mask

Inport Mask using the GUI

From the DMF Features page, proceed to the Inport Mask feature card and perform the following steps to enable the feature.
  1. Select the Inport Mask card.
    Figure 36. Inport Mask Disabled
  2. Toggle the Inport Mask to On.
  3. Confirm the activation by clicking Enable, or click Cancel to return to the DMF Features page.
    Figure 37. Enable Inport Mask
  4. Inport Mask is running.
    Figure 38. Inport Mask Enabled
  5. To disable the feature, toggle the Inport Mask to Off. Click Disable and confirm.
    Figure 39. Disable Inport Mask
    The feature card updates with the status.
    Figure 40. Inport Mask Disabled

Match Mode

Switches have finite hardware resources available for packet matching on aggregated traffic streams. This resource allocation is relatively static and configured in advance. The DANZ Monitoring Fabric supports three allocation schemes, referred to as switching (match) modes:
  • L3-L4 mode (default mode): With L3-L4 mode, fields other than src-mac and dst-mac can be used for specifying policies. If no policies use src-mac or dst-mac, the L3-L4 mode allows more match rules per switch.
  • Full-match mode: With full-match mode, all matching fields, including src-mac and dst-mac, can be used while specifying policies.
  • L3-L4 Offset mode: L3-L4 offset mode allows matching beyond the L4 header up to 128 bytes from the beginning of the packet. The number of matches per switch in this mode is the same as in full-match mode. As with L3-L4 mode, matches using src-mac and dst-mac are not permitted.
    Note: Changing switching modes causes all fabric switches to disconnect and reconnect with the Controller. Also, all existing policies will be reinstalled. The switching mode applies to all DMF switches in the DANZ Monitoring Fabric. Switching between modes is possible, but any match rules incompatible with the new mode will fail.

Setting the Match Mode Using the CLI

To use the CLI to set the match mode, enter the following command:
controller-1(config)# match-mode {full-match | l3-l4-match | l3-l4-offset-match}
For example, the following command sets the match mode to full-match mode:
controller-1(config)# match-mode full-match

Setting the Match Mode Using the GUI

From the DMF Features page, proceed to the Match Mode feature card and perform the following steps to enable the feature.

  1. Select the Match Mode card.
    Figure 41. L3-L4 Match Mode
  2. Enter the edit mode using the pencil icon.
    Figure 42. Configure Switching Mode
  3. Change the switching mode as required using the drop-down menu. The default mode is L3-L4 Match.
    Figure 43. L3-L4 Match Options
  4. Click Submit and confirm the operation when prompted.
Note: An error message is displayed if the existing configuration of the monitoring fabric is incompatible with the specified switching mode.

Retain User Policy VLAN

Enable and disable Retain User Policy VLAN using the steps described in the following topics.

Retain User Policy VLAN using the CLI

When enabled, this feature sends traffic matching a dynamic overlap policy to a delivery interface with the user policy VLAN tag instead of the overlap dynamic policy VLAN tag. This feature is supported only in push-per-policy mode.

For example, consider policy P1 with filter interface F1 and delivery interface D1, and policy P2 with filter interface F1 and delivery interface D2. When the overlap policy condition is met, the dynamic overlap policy P1_o_P2 is created with filter interface F1 and delivery interfaces D1 and D2. User policy P1 is assigned a VLAN (VLAN 10) and P2 is assigned a VLAN (VLAN 20) when created, and the overlap policy is also assigned a VLAN (VLAN 30) when it is dynamically created. When this feature is enabled, traffic forwarded to D1 carries the policy VLAN tag of P1 (VLAN 10) and traffic forwarded to D2 carries the policy VLAN tag of P2 (VLAN 20). When this feature is disabled, traffic forwarded to D1 and D2 carries the dynamic overlap policy VLAN tag (VLAN 30). By default, this feature is disabled.
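The tag selection for the overlap example above can be modeled as a small lookup (illustrative Python, not DMF code; the interface names and VLAN values come from the example, the function name is an assumption):

```python
# Toy model, not DMF code: which VLAN tag egress traffic carries for the
# overlap example: P1 (F1 -> D1, VLAN 10), P2 (F1 -> D2, VLAN 20),
# dynamic overlap policy P1_o_P2 (VLAN 30).
def egress_vlan(delivery_interface: str, retain_user_policy_vlan: bool) -> int:
    user_policy_vlan = {"D1": 10, "D2": 20}  # VLANs of user policies P1, P2
    overlap_policy_vlan = 30                 # VLAN of dynamic policy P1_o_P2
    if retain_user_policy_vlan:
        # Feature enabled: keep the originating user policy's tag.
        return user_policy_vlan[delivery_interface]
    # Feature disabled (default): traffic carries the overlap policy's tag.
    return overlap_policy_vlan

assert egress_vlan("D1", retain_user_policy_vlan=True) == 10
assert egress_vlan("D2", retain_user_policy_vlan=True) == 20
assert egress_vlan("D1", retain_user_policy_vlan=False) == 30
```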

Feature Limitations:

  • An overlap dynamic policy will fail when the overlap policy has filter (F1) and delivery interface (D1) on the same switch (switch A) and another delivery interface (D2) on another switch (switch B).
  • Post-to-delivery dynamic policy will fail when it has a filter interface (F1) and a delivery interface (D1) on the same switch (switch A) and another delivery interface (D2) on another switch (switch B).
  • Overlap policies may be reinstalled when a fabric port goes up or down when this feature is enabled.
  • Double-tagged VLAN traffic is not supported and will be dropped at the delivery interface.
  • Tunnel interfaces are not supported with this feature.
  • Only IPv4 traffic is supported; other non-IPv4 traffic will be dropped at the delivery interface.
  • Delivery interfaces with IP addresses (L3 delivery interfaces) are not supported.
  • This feature is not supported on EOS switches (Arista 7280 switches).
  • Delivery interface statistics may not be accurate when displayed using the sh policy command. This will happen when policy P1 has F1, D1, D2 and policy P2 has F1, D2. In this case, overlap policy P1_o_P2 will be created with delivery interfaces D1, D2. Since D2 is in both policies P1 and P2, overlap traffic will be forwarded to D2 with both the P1 policy VLAN and the P2 policy VLAN. The sh policy <policy_name> command will not show this doubling of traffic on delivery interface D2. Delivery interface statistics will show this extra traffic forwarded from the delivery interface.
To enable this feature, enter the following command:
controller-1(config)# retain-user-policy-vlan
This will enable retain-user-policy-vlan feature. Non-IP packets will be dropped at delivery. Enter "yes" (or "y") to continue: yes
To see the current Retain Policy VLAN configuration, enter the following command:
controller-1> show fabric
~~~~~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~~~~~
Number of switches : 14
Inport masking : True
Number of unmanaged services : 0
Number of switches with service interfaces : 0
Match mode : l3-l4-offset-match
Number of switches with delivery interfaces : 11
Filter efficiency : 1:1
Uptime : 4 days, 8 hours
Max overlap policies (0=disable) : 10
Auto Delivery Interface Strip VLAN : True
Number of core interfaces : 134
State : Enabled
Max delivery BW (bps) : 2.18Tbps
Health : unhealthy
Track hosts : True
Number of filter interfaces : 70
Number of policies : 101
Start time : 2022-02-28 16:18:01.807000 UTC
Number of delivery interfaces : 104
Retain User Policy Vlan : True

Use this feature with the strip-second-vlan option during delivery interface configuration to preserve the outer DMF fabric policy VLAN and strip the inner VLAN of traffic forwarded to a tool, or with the strip-no-vlan option to preserve both VLAN tags.

Retain User Policy VLAN using the GUI

From the DMF Features page, proceed to the Retain User Policy VLAN feature card and perform the following steps to enable the feature.
  1. Select the Retain User Policy VLAN card.
    Figure 44. Retain User Policy VLAN Disabled
  2. Toggle the Retain User Policy VLAN to On.
  3. Confirm the activation by clicking Enable, or click Cancel to return to the DMF Features page.
    Figure 45. Enable Retain User Policy VLAN
  4. Retain User Policy VLAN is running.
    Figure 46. Retain User Policy VLAN Enabled
  5. To disable the feature, toggle the Retain User Policy VLAN to Off. Click Disable and confirm.
    Figure 47. Disable Retain User Policy VLAN
    The feature card updates with the status.
    Figure 48. Retain User Policy VLAN Disabled

Tunneling

For more information about Tunneling, refer to the Understanding Tunneling section.

Enable and disable Tunneling using the steps described in the following topics.

Configuring Tunneling using the GUI

From the DMF Features page, proceed to the Tunneling feature card and perform the following steps to enable the feature.
  1. Select the Tunneling card.
    Figure 49. Tunneling Disabled
  2. Toggle Tunneling to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 50. Enable Tunneling
    Note: CRC Check must be running before attempting to enable Tunneling. An error message displays if CRC Check is not enabled. Proceeding to click Enable results in a validation error message. Refer to the CRC Check section for more information on configuring the CRC Check feature.
    Figure 51. CRC Check Warning Message
  4. Tunneling VLAN is running.
    Figure 52. Tunneling Enabled
  5. To disable the feature, toggle Tunneling to Off. Click Disable and confirm.
    Figure 53. Disable Tunneling
    The feature card updates with the status.
    Figure 54. Tunneling VLAN Disabled

Configuring Tunneling using the CLI

To enable Tunneling, enter the following command:
controller-1(config)# tunneling 

Tunneling is an Arista Licensed feature. 
Please ensure that you have purchased the license for tunneling before using this feature. 
Enter "yes" (or "y") to continue: y
controller-1(config)#
To disable Tunneling, enter the following command:
controller-1(config)# no tunneling 
This would disable tunneling feature? Enter "yes" (or "y") to continue: y
controller-1(config)#

VLAN Preservation

In DANZ Monitoring Fabric (DMF), metadata is appended to the packets forwarded by the fabric to a tool attached to a delivery interface. This metadata is encoded primarily in the outer VLAN tag of the packets.

By default (using the auto-delivery-strip feature), this outer VLAN tag is always removed on egress upon delivery to a tool.

The VLAN preservation feature introduces the choice to selectively preserve a packet's outer VLAN tag, instead of stripping all tags or preserving all of them.

VLAN preservation works in both push-per-filter and push-per-policy mode for auto-assigned and user-configured VLANs.

Note: VLAN preservation applies to switches running SWL OS and does not apply to switches running EOS.

This functionality supports only 2000 VLAN ID and port combinations per switch.

VLAN preservation is supported only on select Broadcom® switch ASICs. Ensure your switch model supports this feature before attempting to configure it.

Using the CLI to Configure VLAN Preservation

Configure VLAN preservation at two levels: global and local. A local configuration can override the global configuration.
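The global/local precedence can be modeled roughly as follows (illustrative Python, not DMF code; the function and parameter names are assumptions) — a local preserve configuration, when present, replaces the global one for that interface:

```python
# Toy precedence model, not DMF code: is a given VLAN preserved on a
# delivery interface, given global and (optional) local preserve sets?
from typing import Optional, Set

def preserved(vlan: int, global_vlans: Set[int],
              local_vlans: Optional[Set[int]] = None) -> bool:
    # The local interface configuration, when present, overrides the
    # global configuration for that interface.
    effective = local_vlans if local_vlans is not None else global_vlans
    return vlan in effective

assert preserved(100, {100, 200})             # global configuration applies
assert not preserved(100, {100, 200}, {300})  # local overrides global
assert preserved(300, {100, 200}, {300})
```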

Global Configuration

Enable VLAN preservation globally using the vlan-preservation command from config mode to apply a global configuration.

(config)# vlan-preservation
Two options exist while in the config-vlan-preservation submode:
  • preserve-user-configured-vlans
  • preserve-vlan

Use the help function to list the options by entering a ? (question mark).

(config-vlan-preservation)# ?
Commands:
preserve-user-configured-vlans Preserve all user-configured VLANs for all delivery interfaces
preserve-vlan                  Configure VLAN ID to preserve for all delivery interfaces

Use the preserve-user-configured-vlans option to preserve all user-configured VLANs. The packets with the user-configured VLANs will have their fabric-applied VLAN tags preserved even after leaving the respective delivery interface.

(config-vlan-preservation)# preserve-user-configured-vlans

Use the preserve-vlan option to specify and preserve a particular VLAN ID. Any VLAN ID may be provided. In the following example, the packets with VLAN ID 100 or 200 will have their fabric-applied VLAN tags preserved upon delivery to the tool.

(config-vlan-preservation)# preserve-vlan 100
(config-vlan-preservation)# preserve-vlan 200

Local Configuration

This feature applies to delivery and both-filter-and-delivery interface roles.

Fabric-applied VLAN tag preservation can be enabled locally on each delivery interface as an alternative to the global VLAN preservation configuration. To enable this functionality locally, enter the following configuration submode using the if-vlan-preservation command to specify either one of the two available options. Use the help function to list the options by entering a ? (question mark).

(config-switch-if)# if-vlan-preservation
(config-switch-if-vlan-preservation)# ?
Commands:
preserve-user-configured-vlans Preserve all user-configured VLANs for all delivery interfaces
preserve-vlan                  Configure VLAN ID to preserve for all delivery interfaces

Use the preserve-user-configured-vlans option to preserve all user-configured VLAN IDs in push-per-policy or push-per-filter mode on a selected delivery interface. All packets egressing such delivery interface will have their user-configured fabric VLAN tags preserved.

(config-switch-if-vlan-preservation)# preserve-user-configured-vlans

Use the preserve-vlan option to specify and preserve a particular VLAN ID. For example, if any packets with VLAN ID 100 or 300 egress the selected delivery interface, VLAN IDs 100 and 300 will be preserved.

(config-switch-if-vlan-preservation)# preserve-vlan 100
(config-switch-if-vlan-preservation)# preserve-vlan 300
Note: Any local vlan-preservation configuration overrides the global configuration for the selected interfaces by default.

On an MLAG delivery interface, the local configuration follows the same model, as shown below.

(config-mlag-domain-if)# if-vlan-preservation member role 
(config-mlag-domain-if)# if-vlan-preservation 
(config-mlag-domain-if-vlan-preservation)# preserve-user-configured-vlans preserve-vlan

To disable selective VLAN preservation for a particular delivery or both-filter-and-delivery interface, use the following command to disable the feature's global and local configuration for the selected interface:

(config-switch-if)# role delivery interface-name del 
<cr> no-analytics strip-no-vlan strip-second-vlan
ip-address no-vlan-preservation strip-one-vlan strip-two-vlan
(config-switch-if)# role delivery interface-name del no-vlan-preservation

CLI Show Commands

The following show command displays the device name on which VLAN preservation is enabled and the information about which VLAN is preserved on specific selected ports. Use the data in this table primarily for debugging purposes.

# show switch all table vlan-preserve 
# Vlan-preserve Device name Entry key
-|-------------|-----------|----------------------|
1 0 delivery1 VlanVid(0x64), Port(6)
2 0 filter1 VlanVid(0x64), Port(6)
3 0 core1 VlanVid(0x64), Port(6)

Using the GUI to Configure VLAN Preservation

VLAN preservation can be configured at global and local levels. A local configuration can override the global configuration. Follow the steps outlined below to configure a Global Configuration (steps 1 - 4), Local Configuration (steps 5-7), or an MLAG Delivery Interface configuration within an MLAG domain (step 8).
Global Configuration
  1. To view or edit the global configuration, navigate to the DANZ Monitoring Fabric (DMF) Features page by clicking the gear icon in the top right of the navigation bar.
    Figure 55. DMF Menu Bar
    The DMF Features page allows managing fabric-wide settings for DMF.
  2. Scroll to the VLAN Preservation card.
    Figure 56. DMF Features Page
    Figure 57. VLAN Preservation Card
  3. Click the Edit button (pencil icon) to configure or modify the global VLAN Preservation feature settings.
    Figure 58. Edit VLAN Preservation Configuration
    The edit screen has two input sections:
    • Toggle on or off the Preserve User Configured VLANs.
    • Enter the parameters for VLAN Preserve using the following functions:
      • Use the Add VLAN button to add VLAN IDs.
      • Select the Single VLAN type drop-down to add a single VLAN ID.
      • Select the Range VLAN type drop-down to add a continuous VLAN ID range.
      • Use the Trash button (delete) to delete a single VLAN ID or a VLAN ID range.
  4. Click the Submit button to save the configuration.
     
Local Configuration
  1. The VLAN Preservation configuration can be applied per-delivery interface while configuring or editing a delivery or filter-and-delivery interface in the Monitoring Interfaces page and Monitoring Interfaces > Delivery Interfaces page.
    Figure 59. Monitoring Interfaces Delivery Interface Create Interface
  2. The following inputs are available for the local feature configuration:
    • Toggle the Preserve User Configured VLANs button to on. Use this option to preserve all user-configured VLAN IDs in push-per-policy or push-per-filter mode on a selected delivery interface. The packets with the user-configured VLANs will have their fabric-applied VLAN tags preserved even after leaving the respective delivery interface.
    • VLAN Preserve. Use the + and - icon buttons to add and remove VLAN IDs.
    • Toggle the VLAN Preservation button to on. Disabling this option will ignore this feature configuration given globally/locally for this delivery interface. VLAN Preservation is enabled by default.
  3. Click the Save button to save the configuration.
     
VLAN Preservation for MLAG Delivery Interfaces
  1. Configure VLAN preservation for MLAG delivery interfaces on the Fabric > MLAGs page. While configuring an MLAG Domain, toggle the VLAN Preservation and Preserve User Configured VLANs switches to on (as required).
    Figure 60. Create MLAG Domain
    Figure 61. MLAG VLAN Preservation & Preserve User Configured VLANs (expanded view)

Troubleshooting

If a tool attached to a delivery interface expects a packet with a preserved VLAN tag but the packet arrives untagged, use the following commands to troubleshoot and double-check the following.
  1. A partial policy installation may occur if any delivery interface fails to preserve the VLAN tag. This can happen when exceeding the 2000 VLAN ID/Port combination limit. Use the show policy policy-name command to obtain a detailed status, as shown in the following example:
    (config)# show policy vlan-999
    Policy Name: vlan-999
    Config Status: active - forward
    Runtime Status : installed but partial failure
    Detailed Status: installed but partial failure - 
     Failed to preserve VLAN's on some/all 
     delivery interfaces, see warnings for details
    Priority : 100
    Overlap Priority : 0
    # of switches with filter interfaces : 1
    # of switches with delivery interfaces : 1
    # of switches with service interfaces: 0
    # of filter interfaces : 1
    # of delivery interfaces : 1
    # of core interfaces : 2
    # of services: 0
    # of pre service interfaces: 0
    # of post service interfaces : 0
    Push VLAN: 999
    Post Match Filter Traffic: -
    Total Delivery Rate: -
    Total Pre Service Rate : -
    Total Post Service Rate: -
    Overlapping Policies : none
    Component Policies : none
    Installed Time : 2023-11-06 21:01:11 UTC
    Installed Duration : 1 week
  2. Verify the running config and review if the VLAN preservation configuration is enabled for that VLAN ID and on that delivery interface.
    (config-vlan-preservation)# show running-config | grep "preserve"
    ! vlan-preservation
    vlan-preservation
    preserve-vlan 100
  3. Verify the show switch switch-name table vlan-preserve output. It displays the ports and VLAN ID combinations that are enabled.
    (config-policy)# show switch core1 table vlan-preserve
    # Vlan-preserve Device name Entry key
    -|-------------|-----------|----------------------|
    1 0 core1 VlanVid(0x64), Port(6)
  4. The same configuration can be verified from a switch (e.g., core1) by using the command below:
    root@core1:~# ofad-ctl gt vlan_preserve
    VLAN PRESERVE TABLE:
    --------------------
    VLAN:100 Port:6 PortClass:6
  5. Verify if a switch has any associated preserve VLAN warnings among the fabric warnings:
    (config-vlan-preservation)# show fabric warnings | grep "preserve"
    1 delivery1 (00:00:52:54:00:85:ca:51) Switch 00:00:52:54:00:85:ca:51 
    cannot preserve VLANs for some interfaces due to resource exhaustion.
  6. The show fabric warnings feature-unsupported-on-device command provides information on whether VLAN preservation is configured on any unsupported devices:
    (config-switch)# show fabric warnings feature-unsupported-on-device 
    # Name Warning
    -|----|------------------------------------------------------------|
    1 del1 VLAN preservation feature is not supported on EOS switch eos
In the event of any preserve VLAN fabric warnings, please contact Arista support for assistance.

Reuse of Policy VLANs

Policies can reuse VLANs for policies in different switch islands. A switch island is an isolated fabric managed by a single pair of Controllers; there is no data plane connection between fabrics in different switch islands. For example, with a single Controller pair managing six switches (switch1, switch2, switch3, switch4, switch5, and switch6), the option exists to create two fabrics with three switches each (switch1, switch2, and switch3 in one switch island and switch4, switch5, and switch6 in another), as long as there is no data plane connection between switches in the different switch islands.

There is no command needed to enable this feature. If the above condition is met, creating policies in each switch island with the same policy VLAN tag is supported.

Under this condition, assign the same policy VLAN to two policies in different switch islands using the push-vlan <vlan-tag> command under policy configuration. For example, policy P1 in switch island 1 and policy P2 in switch island 2 can both be assigned VLAN tag 10 using push-vlan 10.

When a data plane link connects two switch islands, they merge into a single switch island. In that case, two policies cannot use the same policy VLAN tag, so one of the policies (P1 or P2) becomes inactive.
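The island rule above amounts to a connectivity check: two policies may reuse a policy VLAN only when their switches are in different connected components of the data plane. The following Python sketch models this; the switch names and link list are illustrative assumptions, not DMF source code.

```python
from collections import defaultdict

def islands(switches, links):
    """Group switches into islands (connected components over data-plane links)."""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for s in switches:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups

def vlan_reuse_ok(switch_a, switch_b, groups):
    """Two policies may share a policy VLAN tag only if their switches
    are in different islands."""
    return not any(switch_a in g and switch_b in g for g in groups)

switches = ["switch1", "switch2", "switch3", "switch4", "switch5", "switch6"]
links = [("switch1", "switch2"), ("switch2", "switch3"),
         ("switch4", "switch5"), ("switch5", "switch6")]
groups = islands(switches, links)
print(vlan_reuse_ok("switch1", "switch4", groups))  # True: different islands
print(vlan_reuse_ok("switch1", "switch3", groups))  # False: same island
```

Adding a link such as ("switch3", "switch4") would merge the two components into one island, after which the reuse check fails, mirroring the behavior described above.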

Rewriting the VLAN ID for a Filter Interface

When sharing a destination tool with multiple filter interfaces, use the VLAN identifier assigned by the rewrite VLAN option to identify the ingress filter interface for specific packets. To use the rewrite VLAN option, assign a unique VLAN identifier to each filter interface. Ensure this VLAN ID is outside of the auto-VLAN range.
Note: In push-per-policy mode, enabling the rewrite VLAN feature on filter interfaces is not supported; attempting to do so displays a validation error. This feature is available only in push-per-filter mode.
The following commands change the VLAN tag on packets received on the interface ethernet10 on f-switch1 to 100. The role command in this example also assigns the alias TAP-PORT-1 to Ethernet interface 10.
controller-1(config)# switch f-switch1
controller-1(config-switch)# interface ethernet10
controller-1(config-switch-if)# role filter interface-name TAP-PORT-1 rewrite vlan 100
The rewrite VLAN option overwrites the original VLAN tag if the frame was already tagged, and this modification means the frame's CRC checksum no longer matches its contents. The switch CRC option, enabled by default, rewrites the CRC after the frame has been modified so that a CRC error does not occur.
Note: Starting with DMF Release 7.1.0, simultaneously rewriting the VLAN ID and MAC address is supported and uses VLAN rewriting to isolate traffic while using MAC rewriting to forward traffic to specific VMs.
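As a rough illustration of why the CRC option is needed, the sketch below models an Ethernet FCS with Python's zlib.crc32. The frame layout, byte offsets, and CRC byte order are simplified assumptions for demonstration, not an accurate 802.1Q frame.

```python
import zlib

def with_fcs(frame: bytes) -> bytes:
    # Append a CRC-32 over the frame contents, standing in for the Ethernet FCS.
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

# Simplified tagged frame: TPID 0x8100, then a 2-byte field carrying VLAN 200.
original = with_fcs(b"\x81\x00" + (200).to_bytes(2, "big") + b"payload")
print(fcs_ok(original))           # True: checksum matches as received

# Rewriting the VLAN tag without touching the checksum leaves a stale FCS...
rewritten = bytearray(original)
rewritten[2:4] = (100).to_bytes(2, "big")  # rewrite VLAN 200 -> 100
print(fcs_ok(bytes(rewritten)))   # False: CRC no longer matches the frame

# ...so the switch recomputes the FCS after modifying the frame.
repaired = with_fcs(bytes(rewritten[:-4]))
print(fcs_ok(repaired))           # True
```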

Reusing Filter Interface VLAN IDs

A DMF fabric comprises groups of switches, known as islands, connected over the data plane. There are no data plane connections between switches in different islands. When Push-Per-Filter forwarding is enabled, monitored traffic is forwarded within an island using the VLAN ID affiliated with a Filter Interface. These VLAN IDs are configurable. Previously, the only recommended configuration was for these VLAN IDs to be globally unique.

This feature adds official support for associating the same VLAN ID with multiple Filter Interfaces as long as they are in different islands. This feature provides more flexibility when duplicating Filter Interface configurations across islands and helps prevent using all available VLAN IDs.

Note that within each island, VLAN IDs must still be unique, which means that Filter Interfaces in the same group of switches cannot have the same ID. When trying to reuse the same VLAN ID within an island, DMF generates a fabric error, and only the first Filter Interface (as sorted alphanumerically by DMF name) remains in use.
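The per-island uniqueness rule and the "first Filter Interface wins" tie-break can be modeled as follows. This is an illustrative sketch of the documented behavior (the interface names and island labels are hypothetical), not DMF's implementation.

```python
def resolve_filter_vlans(filter_interfaces):
    """For each (island, VLAN) pair, only the first Filter Interface
    (sorted alphanumerically by DMF name) remains in use; later
    duplicates would raise a fabric error in DMF."""
    active, conflicts = {}, []
    for name, island, vlan in sorted(filter_interfaces):
        key = (island, vlan)
        if key in active:
            conflicts.append(name)  # duplicate VLAN within the same island
        else:
            active[key] = name
    return active, conflicts

interfaces = [
    ("filter1-f1", "island-A", 100),
    ("filter1-f2", "island-A", 100),  # duplicate VLAN in the same island
    ("filter2-f1", "island-B", 100),  # same VLAN, different island: allowed
]
active, conflicts = resolve_filter_vlans(interfaces)
print(conflicts)  # ['filter1-f2']
```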

Configuration

This feature requires no special configuration beyond the existing Filter Interface configuration workflow.

Troubleshooting

A fabric error occurs if the same VLAN ID is configured more than once in the same island. The error message includes the Filter Interface name, the switch name, and the VLAN ID that is not unique. When encountering this error, pick a different non-conflicting VLAN ID.

Filter Interface invalid VLAN errors can be displayed in the CLI using the following command:

The following is a vertical representation of the CLI output, shown for illustrative purposes only.

>show fabric errors filter-interface-invalid-vlan 
~~ Invalid Filter Interface VLAN(s) ~~
# 1 
DMF Name     filter1-f1
IF Name      ethernet2
Switch       filter1 (00:00:52:54:00:4b:c9:bc)
Rewrite VLAN 1
Details      The configured rewrite VLAN 1 for filter interface filter1-f1
             is not unique within its fabric.
It is helpful to know all of the switches in an island. The following command lists all of the islands (referred to in this command as switch clusters) and their switch members:
>show debug switch-cluster 
# Member 
-|--------------|
1 core1, filter1
It can also be helpful to know how the switches within an island are interconnected. Use the following command to display all the links between the switches:
>show link all
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Links ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Active State Src switch Src IF Name Dst switch Dst IF Name Link Type Since 
-|------------|----------|-----------|----------|-----------|---------|-----------------------|
1 active       filter1    ethernet1   core1      ethernet1   normal    2023-05-24 22:31:39 UTC
2 active       core1      ethernet1   filter1    ethernet1   normal    2023-05-24 22:31:40 UTC

Considerations

  • VLAN IDs must be unique within an island. Filter Interfaces in the same island with the same VLAN ID are not supported.
  • This feature only applies to manually configured Filter Interface VLAN IDs. VLAN IDs that are automatically assigned are still unique across the entire fabric.

Using Push-per-filter Mode

The push-per-filter mode setting does not enable tag-based forwarding. Each filter interface is automatically assigned a VLAN ID; the default range is 1 to 4094. To change the range, use the auto-vlan-range command.

The option exists to manually assign a VLAN not included in the defined range to a filter interface.

To manually assign a VLAN to a filter interface in push-per-filter mode, complete the following steps:

  1. Change the auto-vlan-range from the default (1-4094) to a limited range, as in the following example:
    controller-1(config)# auto-vlan-range vlan-min 1 vlan-max 1000

    The example above configures the auto-VLAN feature to use VLAN IDs from 1 to 1000.

  2. Assign a VLAN ID to the filter interface that is not in the range assigned to the auto-VLAN feature.
    controller-1(config)# role filter interface-name TAP-1 rewrite vlan 1001
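The constraint behind these two steps can be expressed as a small validation: a manually assigned VLAN must be a valid 802.1Q ID and must fall outside the auto-VLAN range. This is an illustrative sketch of the rule, assuming the range configured in step 1; it is not DMF code.

```python
AUTO_VLAN_MIN, AUTO_VLAN_MAX = 1, 1000  # from: auto-vlan-range vlan-min 1 vlan-max 1000

def validate_manual_vlan(vlan: int) -> None:
    """Check that a manually assigned filter-interface VLAN is a valid
    802.1Q ID outside the range reserved for the auto-VLAN feature."""
    if not 1 <= vlan <= 4094:
        raise ValueError(f"VLAN {vlan} is outside the valid 802.1Q range 1-4094")
    if AUTO_VLAN_MIN <= vlan <= AUTO_VLAN_MAX:
        raise ValueError(
            f"VLAN {vlan} falls inside the auto-VLAN range "
            f"{AUTO_VLAN_MIN}-{AUTO_VLAN_MAX}; choose an ID outside it"
        )

validate_manual_vlan(1001)  # succeeds: matches the rewrite vlan 1001 example above
```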

Tag-based Forwarding

The DANZ Monitoring Fabric (DMF) Controller configures each switch with forwarding paths based on the most efficient links between the incoming filter interface and the delivery interface connected to analysis tools. The TCAM capacity of the fabric switches may limit the number of policies that can be configured. The Controller can also use VLAN tag-based forwarding, which reduces the TCAM resources required to implement a policy.

Tag-based forwarding is automatically enabled when the auto-VLAN mode is push-per-policy, which is the default. This configuration improves traffic forwarding within the monitoring fabric. DMF uses the assigned VLAN tags to forward traffic to the correct delivery interface, saving TCAM space. This feature is particularly useful with switches based on the Tomahawk chipset, which have higher throughput but less TCAM space.

Policy Rule Optimization

 

Prefix Optimization

A policy can match with a large number of IPv4 or IPv6 addresses. These matches can be configured explicitly on each match rule, or the match rules can use an address group. With prefix optimization based on IPv4, IPv6, and TCP ports, DANZ Monitoring Fabric (DMF) uses efficient masking algorithms to minimize the number of flow entries in hardware.

Example 1: Optimize the same mask addresses.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# action forward
controller-1(config-policy)# delivery-interface TOOL-PORT-1
controller-1(config-policy)# filter-interface TAP-PORT-1
controller-1(config-policy)# 10 match ip dst-ip 1.1.1.0 255.255.255.255
controller-1(config-policy)# 11 match ip dst-ip 1.1.1.1 255.255.255.255
controller-1(config-policy)# 12 match ip dst-ip 1.1.1.2 255.255.255.255
controller-1(config-policy)# 13 match ip dst-ip 1.1.1.3 255.255.255.255
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
10 ether-type 2048 dst-ip 1.1.1.0 255.255.255.252
Example 2: When a generic prefix exists, the more specific addresses are not programmed into the TCAM.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# action forward
controller-1(config-policy)# delivery-interface TOOL-PORT-1
controller-1(config-policy)# filter-interface TAP-PORT-1
controller-1(config-policy)# 10 match ip dst-ip 1.1.1.0 255.255.255.255
controller-1(config-policy)# 11 match ip dst-ip 1.1.1.1 255.255.255.255
controller-1(config-policy)# 12 match ip dst-ip 1.1.1.2 255.255.255.255
controller-1(config-policy)# 13 match ip dst-ip 1.1.1.3 255.255.255.255
controller-1(config-policy)# 100 match ip dst-ip 1.1.0.0 255.255.0.0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
100 ether-type 2048 dst-ip 1.1.0.0 255.255.0.0
Example 3: IPv6 prefix optimization. In this case, if a generic prefix exists, the specific addresses are not programmed in the TCAM.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# 25 match ip6 src-ip 2001::100:100:100:0 FFFF:FFFF:FFFF::0:0
controller-1(config-policy)# 30 match ip6 src-ip 2001::100:100:100:0 FFFF:FFFF::0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
30 ether-type 34525 src-ip 2001::100:100:100:0 FFFF:FFFF::0
Example 4: Different subnet prefix optimization. In this case, addresses belonging to different subnets are optimized.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# 10 match ip dst-ip 2.1.0.0 255.255.0.0
controller-1(config-policy)# 11 match ip dst-ip 3.1.0.0 255.255.0.0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
10 ether-type 2048 dst-ip 2.1.0.0 254.255.0.0
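The aggregation in Examples 1 and 2 can be reproduced with the Python standard library's prefix-collapsing routine. This is an illustration of the technique, not DMF's optimizer; note that ipaddress cannot express the non-contiguous masks DMF uses in Example 4.

```python
import ipaddress

# Example 1: four adjacent /32 hosts collapse into a single /30 prefix.
specifics = [ipaddress.ip_network(f"1.1.1.{i}/32") for i in range(4)]
print([str(n) for n in ipaddress.collapse_addresses(specifics)])
# ['1.1.1.0/30']

# Example 2: a generic prefix subsumes the specifics, so only it remains.
with_generic = specifics + [ipaddress.ip_network("1.1.0.0/16")]
print([str(n) for n in ipaddress.collapse_addresses(with_generic)])
# ['1.1.0.0/16']
```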

Transport Port Range and VLAN Range Optimization

The DANZ Monitoring Fabric (DMF) optimizes transport port ranges and VLAN ranges within a single match rule. Improvements in DMF now support cross-match rule optimization.

Show Commands

To view the optimized match rule, use the show command:

# show policy policy-name optimized-match

To view the configured match rules, use the following command:

# show running-config policy policy-name

Consider the following DMF policy configuration.

# show running-config policy p1
! policy
policy p1
action forward
delivery-interface d1
filter-interface f1
1 match ip vlan-id-range 1 4
2 match ip vlan-id-range 5 8
3 match ip vlan-id-range 7 16
4 match ip vlan-id-range 10 12

With the above policy configuration and before the DMF 8.5.0 release, the four match conditions would be optimized into the following TCAM rules:

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 vlan 0 vlan-mask 4092
1 ether-type 2048 vlan 4 vlan-mask 4095
2 ether-type 2048 vlan 5 vlan-mask 4095
2 ether-type 2048 vlan 6 vlan-mask 4094
3 ether-type 2048 vlan 16 vlan-mask 4095
3 ether-type 2048 vlan 8 vlan-mask 4088

However, with the cross-match rule optimizations introduced in this release, the rules installed in the switch would further optimize TCAM usage, resulting in:

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 vlan 0 vlan-mask 4080
1 ether-type 2048 vlan 16 vlan-mask 4095

A similar optimization technique applies to L4 ports in match conditions:

# show running-config policy p1
! policy
policy p1
action forward
delivery-interface d1
filter-interface f1
1 match tcp range-src-port 1 4
2 match tcp range-src-port 5 8
3 match tcp range-src-port 7 16
4 match tcp range-src-port 9 14

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 ip-proto 6 src-port 0 -16
1 ether-type 2048 ip-proto 6 src-port 16 -1
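The optimized VLAN entries above come from decomposing a contiguous value range into TCAM (value, mask) pairs. The sketch below shows a standard decomposition of this kind for the 12-bit VLAN space; it illustrates the technique, not DMF's actual algorithm, and takes as given that the optimizer widens range 1-16 to start at 0 (VLAN 0 does not occur on tagged traffic).

```python
def range_to_value_mask(lo: int, hi: int, bits: int = 12):
    """Decompose an inclusive integer range into (value, mask) entries,
    where mask bits set to 1 must match exactly and 0 bits are wildcards."""
    full = (1 << bits) - 1
    entries = []
    while lo <= hi:
        # Largest power-of-two block aligned at lo that still fits in [lo, hi].
        size = lo & -lo if lo else 1 << bits
        while size > hi - lo + 1:
            size //= 2
        entries.append((lo, full & ~(size - 1)))
        lo += size
    return entries

print(range_to_value_mask(0, 16))
# [(0, 4080), (16, 4095)] -- the cross-rule-optimized VLAN matches shown above
```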

Switch Dual Management Port

 

Overview

When a DANZ Monitoring Fabric (DMF) switch disconnects from the Controller, the switch is taken out of the fabric, causing service interruptions. The dual management feature solves the problem by providing physical redundancy of the switch-to-controller management connection. DMF achieves this by allocating a switch data path port to be bonded with its existing management interface, thereby acting as a standby management interface. Hence, it eliminates a single-point failure in the management connectivity between the switch and the Controller.

Once an interface on a switch is configured for management, this configuration persists across reboots and upgrades until the management configuration is explicitly disabled on the Controller.

Configure an interface for dual management using the CLI or the GUI.

Note: Along with the configuration on the Controller detailed below, dual management requires a physical connection in the same subnet as the primary management link from the data port to a management switch.

Configuring Dual Management Using the CLI

  1. From config mode, specify the switch to be configured with dual management, as in the following example:
    Controller-1(config)# switch DMF-SWITCH-1
    Controller-1(config-switch)#

    The CLI changes to the config-switch submode, to configure the specified switch.

  2. From config-switch mode, enter the interface command to specify the interface to be configured as the standby management interface:
    Controller-1(config-switch)# interface ethernet40
    Controller-1(config-switch-if)#

    The CLI changes to the config-switch-if submode, to configure the specified interface.

  3. From config-switch-if mode, enter the management command to specify the role for the interface:
    Controller-1(config-switch-if)# management
    Controller-1(config-switch-if)#
Note: When assigning an interface to a management role, no other interface-specific commands are honored for that interface (e.g., shut-down, role, speed, etc.).

Configuring Dual Management Using the GUI

  1. Select Fabric > Switches from the main menu.
    Figure 62. Controller GUI Showing Fabric Menu List
  2. Click on the switch name to be configured with dual management.
    Figure 63. Controller GUI Showing Inventory of Switches
  3. Click on the Interfaces tab.
    Figure 64. Controller GUI Showing Switch Interfaces
  4. Identify the interface to be configured as the standby management interface.
    Figure 65. Controller GUI Showing Configure Knob
  5. Click on the Menu button to the left of the identified interface, then click on Configure.
    Figure 66. Controller GUI Showing Interface Settings
  6. Set Use for Management Traffic to Yes. This action configures the interface to the standby management role.
    Figure 67. Use for Management Traffic
  7. Click Save.

Management Interface Selection Using the GUI

By default, the dedicated management interface serves as the management port, with the front panel data port acting as a backup only when the management interface is unavailable:
  • When the dedicated management interface fails, the front panel data port becomes active as the management port.
  • When the dedicated management interface returns, it becomes the active management port.
  • When the management network is undependable, this can lead to switch disconnects.

The Management Interface choice dictates what happens when the management interface returns after a failover. Make this selection using the GUI or the CLI.

Select Fabric > Switches.
Figure 68. Fabric Switches
Click on the switch name to be configured.
Figure 69. Switch Inventory
Select the Actions tab.
Figure 70. Switch Actions

Click the Configure Switch icon and choose the required Management Interface setting.

Figure 71. Controller GUI Showing Dual Management Settings

If selecting Prefer Dedicated Management Interface (the default), when the dedicated management interface goes down, the front panel data port becomes the active management port for the switch. When the dedicated management port comes back up, the dedicated management port becomes the active management port again, putting the front panel data port in an admin down state.

If selecting Prefer Current Interface, when the dedicated management interface goes down, the front panel data port still becomes the active management port for the switch. However, when the dedicated management port comes back up, the front panel data port continues to be the active management port.

Management Interface Selection Using the CLI

By default, the dedicated management interface serves as the management port, with the front panel data port acting as a backup only when the management interface is unavailable:
  • When the dedicated management interface fails, the front panel data port becomes active as the management port.
  • When the dedicated management interface returns, it becomes the active management port.

When the management network is undependable, this can lead to switch disconnects. The management interface selection choice dictates what happens when the management interface returns after a failover.

Controller-1(config)# switch DMF-SWITCH-1
Controller-1(config-switch)#management-interface-selection ?
prefer-current-interface Set management interface selection algorithm
prefer-dedicated-management-interface Set management interface selection algorithm (default selection)
Controller-1(config-switch)#

If selecting prefer-dedicated-management-interface (the default), when the dedicated management interface goes down, the front panel data port becomes the active management port for the switch. When the dedicated management port comes back up, the dedicated management port becomes the active management port again, putting the front panel data port in an admin down state.

If selecting prefer-current-interface, when the dedicated management interface goes down, the front panel data port still becomes the active management port for the switch. However, when the dedicated management port comes back up, the front panel data port continues to be the active management port.
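The two selection modes can be summarized as a small state machine: both fail over to the data port when the dedicated interface goes down, but only the default mode fails back when it returns. The sketch below is an illustrative model of this behavior, not switch code.

```python
def active_after(events, preference):
    """Return which port is the active management port after a sequence of
    'mgmt-down' / 'mgmt-up' events on the dedicated management interface."""
    active = "dedicated"
    for event in events:
        if event == "mgmt-down":
            active = "data-port"   # failover happens in both modes
        elif event == "mgmt-up" and preference == "prefer-dedicated-management-interface":
            active = "dedicated"   # fail back only in the default mode
        # with prefer-current-interface, the data port stays active on mgmt-up
    return active

print(active_after(["mgmt-down", "mgmt-up"], "prefer-dedicated-management-interface"))
# dedicated
print(active_after(["mgmt-down", "mgmt-up"], "prefer-current-interface"))
# data-port
```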

Switch Fabric Management Redundancy Status

To check the status of all switches configured with dual management as well as the interface that is being actively used for management, enter the following command in the CLI:

Controller-1# show switch all mgmt-stats

Additional Notes

  • A maximum of one data-plane interface on a switch can be configured as a standby management interface.
  • The switch management interface ma1 is a bond interface, having oma1 as the primary link and the data plane interface as the secondary link.
  • The bandwidth of the data-plane interface is limited regardless of the physical speed of the interface. Arista Networks recommends immediate remediation when the oma1 link fails.

Controller Lockdown

Controller lockdown mode, when enabled, disallows user configuration such as policy configuration, inline configuration, and rebooting of fabric components, and disables data path event processing. Any change in the data path will not be processed.

The primary use case for this feature is a planned management switch upgrade. During such an upgrade, DANZ Monitoring Fabric (DMF) switches disconnect from the Controller, and DMF policies are reprogrammed, disrupting traffic forwarding to tools. Enabling this feature before starting a management switch upgrade ensures that existing DMF policies are not disturbed when DMF switches disconnect from the Controller, so traffic continues to be forwarded to the tools.

Note: DMF policies are reprogrammed when the switches reconnect to the DMF fabric after Controller lockdown mode is disabled following the management switch upgrade. Controller lockdown mode is a special operation and should not be enabled for a prolonged period.
  • Operations such as switch reboot, Controller reboot, Controller failover, Controller upgrade, policy configuration, etc., are disabled when Controller lockdown mode is enabled.
  • The command to enable Controller lockdown mode, system control-plane-lockdown enable, is not saved to the running config. Hence, Controller lockdown mode is disabled after Controller power down/up. When failover happens with a redundant Controller configured, the new active Controller will be in Controller lockdown mode but may not have all policy information.
  • In Controller lockdown mode, copying the running config to a snapshot will not include the system control-plane-lockdown enable command.
  • The CLI prompt will start with the prefix LOCKDOWN when this feature is enabled.
  • Link up/down and other events during Controller lockdown mode are processed after Controller lockdown mode is disabled.
  • All the events handled by the switch are processed in Controller lockdown mode. For example, traffic is hashed to other members automatically in Controller lockdown mode if one LAG member fails. Likewise, all switch-handled events related to inline are processed in Controller lockdown mode.
Use the below commands to enable Controller lockdown mode. Only an admin user can enable or disable this feature.
Controller# configure
Controller(config)# system control-plane-lockdown enable
Enabling control-plane-lockdown may cause service interruption. Do you want to continue ("y" or
"yes" to continue): yes
LOCKDOWN Controller(config)#
To disable Controller lockdown mode, use the command below:
LOCKDOWN Controller(config)# system control-plane-lockdown disable
Disabling control-plane-lockdown will bring the fabric to normal operation. This may cause some
service interruption during the transition. Do you want to continue ("y" or "yes" to continue):
yes
Controller(config)#

CPU Queue Stats and Debug Counters

Switch Light OS (SWL) switches can now report their CPU queue statistics and debug counters. To view these statistics, use the DANZ Monitoring Fabric (DMF) Controller CLI. DMF exports the statistics to any connected DMF Analytics Node.

The CPU queue statistics provide visibility into the different queues that the switch uses to prioritize packets needing to be processed by the CPU. Higher-priority traffic is assigned to higher-priority queues.

The SWL debug counters, while not strictly limited to packet processing, include information related to the Packet-In Multiplexing Unit (PIMU). The PIMU performs software-based rate limiting and acts as a second layer of protection for the CPU, allowing the switch to prioritize specific traffic.

Note: The feature runs on all SWL switches supported by DMF.

Configuration

These statistics are collected automatically and do not require any additional configuration to enable.

To export statistics, configure a DMF Analytics Node. Please refer to the DMF User Guide for help configuring an Analytics Node.

Show Commands

Showing the CPU Queue Statistics

The following command shows the statistics for the CPU queues on a single switch.
controller-1> show switch FILTER-SWITCH-1 queue cpu
# Switch          OF Port Queue ID Type      Tx Packets Tx Bytes  Tx Drops Usage
-|---------------|-------|--------|---------|----------|---------|--------|---------------------------------|
1 FILTER-SWITCH-1 local   0        multicast 830886     164100990 0        lldp, l3-delivery-arp, tunnel-arp
2 FILTER-SWITCH-1 local   1        multicast 0          0         0        l3-filter-arp, analytics
3 FILTER-SWITCH-1 local   2        multicast 0          0         0
4 FILTER-SWITCH-1 local   3        multicast 0          0         0
5 FILTER-SWITCH-1 local   4        multicast 0          0         0        sflow
6 FILTER-SWITCH-1 local   5        multicast 0          0         0
7 FILTER-SWITCH-1 local   6        multicast 0          0         0        l3-filter-icmp

There are a few things to note about this output:

  • The CPU's logical port is also known as the local port.
  • The counter values shown are based on the last time the statistics were cleared.
  • Different CPU queues may be used for various types of traffic. The Usage column displays the traffic that an individual queue is handling. Not every CPU queue is used.

The details token can be added to view more information. This includes the absolute (or raw) counter values, the last updated time, and the last cleared time.

Showing the Debug Counters

The following command shows all of the debug counters for a single switch:
controller-1> show switch FILTER-SWITCH-1 debug-counters 
#  Switch          Name                            Value   Description
--|---------------|-------------------------------|-------|-------------------------------------------|
1  FILTER-SWITCH-1 arpra.total_in_packets          1183182 Packet-ins recv'd by arpra
2  FILTER-SWITCH-1 debug_counter.register          79      Number of calls to debug_counter_register
3  FILTER-SWITCH-1 debug_counter.unregister        21      Number of calls to debug_counter_unregister
4  FILTER-SWITCH-1 pdua.total_pkt_in_cnt           1183182 Packet-ins recv'd by pdua
5  FILTER-SWITCH-1 pimu.hi.drop                    8       Packets dropped
6  FILTER-SWITCH-1 pimu.hi.forward                 1183182 Packets forwarded
7  FILTER-SWITCH-1 pimu.hi.invoke                  1183190 Rate limiter invoked
8  FILTER-SWITCH-1 sflowa.counter_request          9325983 Counter requests polled by sflowa
9  FILTER-SWITCH-1 sflowa.packet_out               7883772 Sflow datagrams sent by sflowa
10 FILTER-SWITCH-1 sflowa.port_features_update     22      Port features updated by sflowa
11 FILTER-SWITCH-1 sflowa.port_status_notification 428     Port status notif's recv'd by sflowa

The counter values shown are based on the last time the statistics were cleared.

Add the name or the ID token and a debug counter name or ID to filter the output.

Add the details token to view more information. This includes the debug counter ID, the absolute (or raw) counter values, the last updated time, and the last cleared time.

Clear Commands

Clearing the Debug Counters

The following command will clear all of the debug counters for a single switch:
controller-1# clear statistics debug-counters

Clearing all Statistics

To clear both the CPU queue stats and the debug counters for every switch, use the following command:
controller-1# clear statistics
Note: This command is not limited to switches; it clears any clearable statistics for every device.

Analytics Export

The following statistics are automatically exported to a connected Analytics Node:

  • CPU queue statistics for every switch.
    Note: This does not include the statistics for queues associated with physical switch interfaces.
  • The PIMU-related debug counters. These are debug counters whose name begins with pimu. No other debug counters are exported.

DMF exports these statistics once every minute.

Note: The exported CPU queue statistics will include port number -2, which refers to the switch CPU's logical port.

Troubleshooting

Use the details token with the new show commands to display more information about the statistics. This information includes timestamps showing when the statistics were collected and when they were last cleared.

To view the statistics successfully exported to the Analytics Node, use the redis-cli command from the Bash shell on the DMF Controller to query the Redis server on the Analytics Node.

The following command queries for the last ten exported debug counters:
redis-cli -h analytics-ip -p 6379 LRANGE switch-debug-counters -10 -1
Likewise, to query for the last ten exported CPU queue stats:
redis-cli -h analytics-ip -p 6379 LRANGE switch-queue-stats -10 -1
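The same queries can be issued programmatically. The helper below only builds the LRANGE arguments (negative indices count from the tail of the list, so -10 to -1 selects the last ten entries); the commented-out section sketches how it would be used with the third-party redis-py package, which is an assumption and requires a reachable Analytics Node.

```python
def last_n_query(key: str, n: int = 10):
    """Return (key, start, stop) arguments for an LRANGE fetching the
    last n entries of a Redis list."""
    return (key, -n, -1)

# With redis-py installed, the redis-cli queries above translate to:
#   import redis
#   r = redis.Redis(host="analytics-ip", port=6379)
#   r.lrange(*last_n_query("switch-debug-counters"))
#   r.lrange(*last_n_query("switch-queue-stats"))

print(last_n_query("switch-queue-stats"))  # ('switch-queue-stats', -10, -1)
```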

Limitations

  • Only the CPU queue stats are exported to the Analytics Node. Physical interface queue stats are not exported.
  • Only the PIMU-related debug counters are exported to the Analytics Node. No other debug counters are exported.
  • Only SWL switches are currently supported. EOS switches are not supported.

Egress Filtering

Before the DANZ Monitoring Fabric (DMF) 8.6 release, DMF performed filtering only at the filter port, based on policy match rules. As a result, the system delivered the same traffic to all of a policy's delivery interfaces and the tools attached to those delivery ports.

Egress Filtering introduces an option to send different traffic to each tool attached to the policy's delivery setting. It provides additional filtering at the delivery ports based on the egress filtering rules specified at the interface.

DMF supports egress filtering on the delivery and recorder node interfaces and supports configuring IPv4 and IPv6 rules on the same interface. Only packets with an IPv4 header are subject to the rules associated with the IPv4 token, while packets with an IPv6 header are only subject to the rules associated with the IPv6 token.

Egress Filtering applies to switches running SWL OS and does not apply to Extensible Operating System (EOS) switches.

Configuring Egress Filtering using the CLI

CLI Configuration

The egress filtering feature is configurable at the interface level. To enable it, run the egress-filtering command from the config-switch-if submode.
(config)# switch DCS-7050SX3-48YC8 
(config-switch)# interface ethernet18
(config-switch-if)# egress-filtering 
(config-switch-if-egress-filtering)#

In the config-switch-if-egress-filtering submode, enter the rule's sequence number. The number represents the sequence in which the rules are applied. The lowest sequence number will have the highest priority.

Tip: Leave gaps between the sequence numbers so that new rules can be added in the middle later, if necessary.
(config-switch-if-egress-filtering)# 1
allow  Forward traffic matching this rule
drop   Drop traffic matching this rule
After the sequence number, specify the action of the rule. It can be either drop or allow.
(config-switch-if-egress-filtering)# 1 allow 
any ipv4 ipv6 
After specifying the action, enter the rule's target traffic type: ipv4, ipv6, or any.
(config-switch-if-egress-filtering)# 1 allow
any  ipv4  ipv6

Any Traffic

The following illustrates a rule to allow all traffic on the interface, in this case, ethernet18.
(config-switch-if-egress-filtering)# 1 allow any
And a rule to drop all traffic on the interface, in this case, ethernet18.
(config-switch-if-egress-filtering)# 1 drop any
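Rule evaluation can be pictured as a first-match decision: rules are tried in ascending sequence number, so the lowest number has the highest priority. The sketch below is an illustrative model; the default verdict for unmatched traffic is an assumption, not documented behavior.

```python
def egress_verdict(rules, packet_type):
    """Apply egress filtering rules in ascending sequence-number order;
    the first rule whose traffic type matches decides the verdict."""
    for _seq, action, match in sorted(rules):
        if match == "any" or match == packet_type:
            return action
    return "drop"  # assumption: traffic matching no rule is not forwarded

rules = [
    (10, "allow", "ipv4"),  # 10 allow ipv4
    (20, "drop", "any"),    # 20 drop any
]
print(egress_verdict(rules, "ipv4"))  # allow
print(egress_verdict(rules, "ipv6"))  # drop
```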

IPv4 Traffic

To allow or drop all IPv4 traffic on an interface, use the following commands:

Drop
(config-switch-if-egress-filtering)# 1 drop ipv4
(config-switch-if-egress-filtering-ipv4)#
Allow
(config-switch-if-egress-filtering)# 1 allow ipv4
(config-switch-if-egress-filtering-ipv4)#
In the config-switch-if-egress-filtering-ipv4 submode, DMF supports the following qualifiers for IPv4 traffic filtering, along with IP address, port, and VLAN ranges:
(config-switch-if-egress-filtering-ipv4)#
dst-ip        dst-port        inner-vlan        ip-proto    outer-vlan-range  src-ip-range  src-port-range
dst-ip-range  dst-port-range  inner-vlan-range  outer-vlan  src-ip            src-port
The following are examples of using the different options.
(config-switch-if-egress-filtering-ipv4)# dst-ip 12.123.123.39
(config-switch-if-egress-filtering-ipv4)# ip-proto 6
(config-switch-if-egress-filtering-ipv4)# dst-port 13
(config-switch-if-egress-filtering-ipv4)# src-port-range min 12 max 23
(config-switch-if-egress-filtering-ipv4)# inner-vlan 23
(config-switch-if-egress-filtering-ipv4)# outer-vlan 45
(config-switch-if-egress-filtering-ipv4)# src-ip 12.123.145.39

IPv6 Traffic

To allow or drop all IPv6 traffic on an interface, use the following commands:

Drop
(config-switch-if-egress-filtering)# 1 drop ipv6
(config-switch-if-egress-filtering-ipv6)#
Allow
(config-switch-if-egress-filtering)# 1 allow ipv6
(config-switch-if-egress-filtering-ipv6)#
The following options are available in the submode config-switch-if-egress-filtering-ipv6 for IPv6 traffic filtering.
(config-switch-if-egress-filtering-ipv6)#
dst-port        inner-vlan        ip-proto    outer-vlan-range  src-port-range
dst-port-range  inner-vlan-range  outer-vlan  src-port

Show Commands

The following show command displays the Egress Filtering enabled device name, the information about the specified rule under the Entry key column, and the rule's action under the Entry value column. DMF uses the table to communicate the egress filtering rules with the device. The data in this table is primarily intended for debugging and communication purposes.
# show switch all table egress-flow-1 
# Egress-flow-1 Device name       Entry key                                                       Entry value
-|-------------|-----------------|---------------------------------------------------------------|----------------------------------------------|
1 0             DCS-7050SX3-48YC8 Priority(1000), Port(13), EthType(2048), Ipv4Src(12.123.123.12) Name(__Rule1__), Data([0, 0, 0, 0]), NoDrop()
2 1             DCS-7050SX3-48YC8 Priority(0), Port(13)                                           Name(__Rule0__), Data([0, 0, 0, 0]), Drop()

If any egress filtering warnings are present, they can be seen by running the show fabric warnings egress-filtering-warnings command. The output lists the switch name and the interface name on which an egress filtering warning is present, with a detailed message.
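When scripting against this output, the Entry key and Entry value fields can be split into individual Name(value) tokens. The following is a minimal sketch of such a parser; the regex and the helper name are assumptions for illustration, not part of any DMF tooling:

```python
import re

# Matches tokens such as Priority(1000) or Ipv4Src(12.123.123.12)
TOKEN = re.compile(r"(\w+)\(([^)]*)\)")

def parse_entry(field: str) -> dict:
    """Parse an Entry key field like
    'Priority(1000), Port(13), EthType(2048)' into a dict."""
    return {name: value for name, value in TOKEN.findall(field)}

parse_entry("Priority(1000), Port(13), EthType(2048), Ipv4Src(12.123.123.12)")
# {'Priority': '1000', 'Port': '13', 'EthType': '2048', 'Ipv4Src': '12.123.123.12'}
```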

Configuration Validation Messages

The following are examples of validation failure messages and their potential causes.

A validation exception occurs when configuring an egress filtering rule without specifying EtherType.
Validation failed: EtherType is mandatory for egress filtering rule
A similar validation applies to the action, which is also mandatory for egress filtering rules. Each rule can contain a maximum of two ranges; exceeding this limit causes a validation failure.
Validation failed: A rule cannot contain more than 2 configured ranges
DMF does not support configuring an individual port value and a port range for the same qualifier in the same rule; configuring a source port together with a source port range results in a validation failure.
Validation failed: Source port and its ranges are not supported together
A validation failure occurs if any specified range has a minimum value greater than its maximum value, for example, when the specified inner VLAN minimum exceeds the maximum.
Validation failed: Inner VLAN min cannot be greater than inner VLAN max
The ip-proto setting is mandatory when specifying any source or destination port number. Specifying a source or destination port without ip-proto causes a validation failure.
Validation failed: IP protocol number is mandatory for source port
Validation failed: IP protocol number is mandatory for destination port
A validation failure also occurs when the ip-proto specified with a source or destination port is an unsupported protocol. DMF only supports the TCP (6), UDP (17), and SCTP (132) protocol numbers for port qualifiers.
Validation failed: IP protocol number <protocol number> is unsupported for source port
Validation failed: IP protocol number <protocol number> is unsupported for destination port
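The documented validation rules can be collected into a standalone checker. The sketch below is an illustration of the rules listed above, not DMF's actual implementation; the dictionary keys are assumed names:

```python
SUPPORTED_PORT_PROTOS = {6, 17, 132}  # TCP, UDP, SCTP

def validate_rule(rule: dict) -> list:
    """Return the validation errors for one egress filtering rule.

    Assumed keys: ether_type, action, ip_proto, src_port, dst_port,
    and *_range keys holding (min, max) tuples.
    """
    errors = []
    if not rule.get("ether_type"):
        errors.append("EtherType is mandatory for egress filtering rule")
    if not rule.get("action"):
        errors.append("Action is mandatory for egress filtering rule")
    ranges = {k: v for k, v in rule.items() if k.endswith("_range") and v}
    if len(ranges) > 2:
        errors.append("A rule cannot contain more than 2 configured ranges")
    if rule.get("src_port") and ranges.get("src_port_range"):
        errors.append("Source port and its ranges are not supported together")
    for name, (lo, hi) in ranges.items():
        if lo > hi:
            errors.append(f"{name} min cannot be greater than max")
    for label, key in (("source", "src"), ("destination", "dst")):
        if rule.get(f"{key}_port") or ranges.get(f"{key}_port_range"):
            proto = rule.get("ip_proto")
            if proto is None:
                errors.append(f"IP protocol number is mandatory for {label} port")
            elif proto not in SUPPORTED_PORT_PROTOS:
                errors.append(f"IP protocol number {proto} is unsupported for {label} port")
    return errors
```

For example, a rule with a source port but no ip-proto fails with the "IP protocol number is mandatory for source port" error, while the same rule with ip_proto set to 6 passes.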

Configuring Egress Filtering using the GUI

Perform the following steps to configure Egress Filtering.
  1. Navigate to the Fabric > Interfaces page.
  2. Select Configure … from the menu.
    Figure 72. Fabric > Interfaces > Configure
  3. Select Egress Filtering to add the egress filtering rules.
    Figure 73. Edit Interface - Egress Filtering
  4. Select the + icon to add a new Egress Filtering Rule.
    Figure 74. Add New Rule
    Enter the primary information for the rule, including:
    • Sequence
    • Action: Allow (default), Drop
    • Ethertype: Any (default), IPv4, IPv6
    • IP Protocol
    When Ethertype is set to IPv4 or IPv6, the Source, Destination and VLANs steps are enabled.
    Figure 75. Configure Egress Filtering Rule
    In Source, enter the following information:
    • Source IP Address Type: Single, Range
    • Source Ports: None, Single, Range

    Similarly, in Destination, enter the destination IP Address and ports.

    Note: Specify the IP Protocol under Traffic when configuring source or destination ports.
    In VLANs, enter the following information:
    • Inner VLANs: None, Single, Range
    • Outer VLANs: None, Single, Range
  5. Select Append to add the rule.
    Use the + and - icons to add or remove the rules, as required.
    Figure 76. Edit Interface
  6. Select Save to complete the interface configuration.

The following UI menus are available to configure Egress Filtering:

  • In Fabric > Switches > Switch Detail > Interfaces, select Configure.

  • In Monitoring > Interfaces.

  • For Recorder Node interfaces, in Monitoring > Recorder Nodes.

Syslog Messages

There are no Syslog messages relevant to the Egress Filtering feature.

Troubleshooting

When a tool connected to a delivery interface configured with egress filtering rules receives an unexpected packet or does not receive the expected packet, use the following steps to troubleshoot the issue.
  1. Check the show running-config command output to see if the egress filtering rules are configured correctly under that particular interface.
  2. Verify the show switch switch-name table egress-flow-1 command output.
    The output displays the port number of the interface for each configured egress filtering rule, its qualifiers under Entry key, its Drop or NoDrop action under Entry value, and a default drop rule for that port number with priority 0.
    # show sw all table egress-flow-1 
    # Egress-flow-1 Device name       Entry key                                                       Entry value
    -|-------------|-----------------|---------------------------------------------------------------|---------------------------------------------|
    1 0             DCS-7050SX3-48YC8 Priority(1000), Port(13), EthType(2048), Ipv4Src(12.123.123.12) Name(__Rule1__), Data([0, 0, 0, 0]), NoDrop()
    2 1             DCS-7050SX3-48YC8 Priority(0), Port(13)                                           Name(__Rule0__), Data([0, 0, 0, 0]), Drop()
  3. Use the following command to verify the same information from a switch (e.g., DCS-7050SX3-48YC8).
    root@DCS-7050SX3-48YC8:~# ofad-ctl gt egr_flow1
    GENTABLE : egr_flow1
    GENTABLE ID : 0x0019
    Table count: matched/lookup : 0/0
    Entry count/limit : 2/1024
    guaranteed max: 512, potential max: 1024
    priority 0 out_port 13 drop true 0p/0b eid 17
    priority 1000 out_port 13 eth_type 0x800/0xffff ipv4_src 12.123.123.12/255.255.255.255 drop false 0p/0b eid 22
  4. Use the show fabric warnings egress-filtering-warning command to view any egress filtering warnings.
    (config)# show fabric warnings egress-filtering-warning 
    ~~~~~~~~~~~~~~~~~~~~~~~~~ Egress filtering warnings ~~~~~~~~~~~~~~~~~~~~~~~~~
    # Switch            IF Name    Warning message
    -|-----------------|----------|----------------------------------------------------------|
    1 DCS-7050SX3-48YC8 ethernet18 Supported only on delivery or recorder node interfaces
    2 DCS-7280SR-48C6   Ethernet8  Egress filtering feature is not supported on EOS switches

Limitations

  • Egress filtering supports only 500 rules per interface. A validation failure occurs when exceeding this limit.
    Validation failed: Only 500 egress filtering rules are supported per interface
  • DMF does not support egress filtering on MLAG delivery interfaces.
  • IPv6 IP address filtering is not allowed, and a validation failure occurs.
    Validation failed: IPv6 destination IP address is not supported
    Validation failed: IPv6 source IP address is not supported

Integrating vCenter with DMF

This chapter describes integrating VMware vCenter with the DANZ Monitoring Fabric (DMF) and monitoring Virtual Machines (VM) in the vCenter.

Overview

The DANZ Monitoring Fabric (DMF) allows the integration and monitoring of VMs in a VMware vCenter cluster. After integrating a vCenter with the DMF fabric, use DMF policies to select different types of traffic from specific VMs and apply managed services, such as deduplication or header slicing, to the selected traffic.

Currently, DMF supports the following versions of VMware vCenter for monitoring:

  • vCenter Server 6.5.0
  • vCenter Server 6.7.0
  • vCenter Server 7.0.0
  • vCenter Server 8.0.0

The DANZ Monitoring Fabric provides two options to monitor a VMware vCenter cluster:

  • Monitoring using span ports: This method monitors a VMware vCenter cluster using a separate monitoring network. The advantage of this configuration is that it has no impact on the production network and has a minimal effect on compute node CPU performance. However, in this configuration, each compute node must have a spare NIC to monitor traffic.

    The following figure illustrates the topology used for local SPAN configuration:

    Figure 1. Mirroring on a Separate SPAN Physical NIC (SPAN)
  • Monitoring using ERSPAN/L2GRE tunnels: Use Encapsulated Remote SPAN (ERSPAN) to monitor VMs running on the ESX hosts within a vCenter instance integrated with DMF. ERSPAN monitors traffic to and from VMs anywhere in the network and does not require a dedicated physical interface card on the ESX host. However, ERSPAN can affect network performance, especially when monitoring VMs connected to the DMF Controller over WAN links or production networks with high utilization.

Using SPAN to Monitor VMs

This section describes the configuration required to integrate the DANZ Monitoring Fabric (DMF) Controller with one or more vCenter instances and to monitor traffic from VMs connected to the VMware vCenter after integration.

The following figure illustrates the topology required to integrate a vCenter instance with the monitoring fabric and deliver the traffic selected by DMF policies to specified delivery ports connected to different monitoring tools.

Figure 2. VMware vCenter Integration and VM Monitoring

When integrated with vCenter, the DMF Controller uses Link Layer Discovery Protocol (LLDP) to automatically identify the available filter interfaces connected to the vCenter instance.

Using ERSPAN to Monitor VMs

Use Remote SPAN (ERSPAN) to monitor VMs running on the ESX hosts within a VMware vCenter instance integrated with the DANZ Monitoring Fabric (DMF). ERSPAN monitors traffic to and from VMs anywhere in the network and does not require a dedicated physical interface card on the ESX host. However, ERSPAN can affect network performance, especially when monitoring VMs connected to the DMF Controller over WAN links or production networks with high utilization.
Figure 3. Using ERSPAN to Monitor VMs

The procedure for deploying ERSPAN is similar to SPAN but requires an additional step to define the tunnel endpoints used on the DMF network to terminate the ERSPAN session.

Configuration Summary for vCenter Integration

The following procedure summarizes the high-level steps required to integrate the vCenter and monitor traffic to or from selected VMs:

  1. (For ERSPAN only) Define the tunnel endpoint.
    Identify a fabric interface connected to the vCenter instance for the tunnel endpoint by entering the tunnel-endpoint command in config mode. To define the tunnel endpoint, refer to the Defining a Tunnel Endpoint section.
  2. Provide the vCenter address and credentials.

    The vSphere extension on the DANZ Monitoring Fabric (DMF) Controller discovers an inventory of VMs and the associated details for each VM.

  3. Select the VMs to monitor on the DMF Controller.

    The DMF Controller uses APIs to invoke the vSphere vCenter instance.

    vSphere calls the DVS to create a SPAN session. The preferred option is to SPAN on a separate physical NIC. However, the option exists to also use ERSPAN by tunneling to the remote interface.

  4. Create policies in DMF to filter, replicate, process, and redirect traffic to tools.

    When using tunnels with ERSPAN, DMF terminates the tunnels using the specified tunnel endpoint. A DMF policy for monitoring VM traffic using a SPAN session must include the required information regarding the vCenter configuration. All match conditions, including User-Defined Offsets (UDFs), are supported.

    The policy for selecting VM traffic to monitor is similar to other DMF policies, except that the filtering interfaces are orchestrated automatically (filter interfaces are auto-discovered and cannot be specified manually). All managed-service actions are supported.

Defining a Tunnel Endpoint

Predefine the tunnel endpoints for creating tunnels when monitoring VMware vCenter traffic using either the GUI or the CLI.

GUI Procedure

To manage tunnel endpoints in the GUI, select Monitoring > Tunnel Endpoints.

Figure 4. Monitoring > Tunnel Endpoints

This page lists the tunnel endpoints that are already configured and provides information about each endpoint.

To create a new tunnel endpoint, click the provision (+) control in the Tunnel Endpoints table.
Figure 5. Create Tunnel Endpoint
To create the tunnel endpoint, enter the following information and click Save:
  • Name: Type a descriptive name for the endpoint.
  • Switch: Select the DMF switch from the selection list for the configured endpoint interface.
  • Interface: Select the interface from the selection list for the endpoint.
  • Gateway: Type the address of the default gateway.
  • IP Address: Type the endpoint IP address.
  • Mask: Type the subnet mask for the endpoint.

CLI Procedure

To configure a tunnel endpoint using the CLI, enter the tunnel-endpoint command from config mode using the following syntax:
controller-1(config)# tunnel-endpoint <name> switch <switch> <interface> ip-address <address> mask
<mask> gateway <address>
For example, the following command defines ethernet7 on CORE-SWITCH as a tunnel endpoint named ERSPAN:
controller-1(config)# tunnel-endpoint ERSPAN switch CORE-SWITCH ethernet7 ip-address 172.27.1.1
mask 255.255.255.0 gateway 172.27.1.2

The IP address assigned to this endpoint is 172.27.1.1, and the next hop address for connecting to the vCenter via ERSPAN is 172.27.1.2.
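The gateway must lie on the subnet defined by the endpoint's IP address and mask. A quick way to sanity-check the three values before entering the command (a helper sketch using the standard library, not part of DMF):

```python
import ipaddress

def check_endpoint(ip: str, mask: str, gateway: str) -> bool:
    """Return True if the gateway lies on the endpoint's subnet."""
    network = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    return ipaddress.ip_address(gateway) in network

check_endpoint("172.27.1.1", "255.255.255.0", "172.27.1.2")  # True: same /24
check_endpoint("172.27.1.1", "255.255.255.0", "172.28.1.2")  # False: off-subnet gateway
```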

Using the GUI to Integrate a vCenter Instance

To integrate a vCenter instance with DANZ Monitoring Fabric (DMF) to begin monitoring VMs, select Integration > vCenter from the DMF menu bar.
Figure 6. Integration > vCenter

This page displays information about the vCenter instances integrated with DMF. To add a vCenter instance for integration with DMF, perform the following steps:

  1. Click the provision control (+) in the table.
    Figure 7. Create vCenter: Info
  2. Type an alphanumeric identifier for the vCenter instance, and (optionally) add a description in the fields provided.
  3. Identify the vCenter hostname to be integrated.
  4. Enter the vCenter username and password for authenticating to the vCenter instance.

    These credentials are used by the DMF Controller when communicating with the vCenter host.

  5. Click Next.
    Figure 8. Create vCenter: Options (page 2)
    This page defines the mirror type as SPAN or ERSPAN. When selecting ERSPAN, the following additional fields complete the ERSPAN configuration:
    • Cluster Tunnel Endpoints (optional)
    • Default Tunnel Endpoint (required)
    • Sampling Rate (optional)
    • Mirrored Packet Length (optional)
    • Create Wildcard Tunnels (optional)

    Use Cluster Tunnel Endpoints to specify a common tunnel endpoint for all the ESXi hosts in the cluster. Use Default Tunnel Endpoint to specify a common tunnel endpoint for all the ESXi hosts regardless of the cluster. When configuring both cluster and default tunnel endpoints, all hosts in clusters form tunnels using the cluster-specific configuration, and all the other hosts that are not a part of any cluster use the default configuration to form tunnels.

  6. Click Next.
    Figure 9. Create vCenter/VMs
  7. To add a VM for monitoring, click the provision control (+).
    Figure 10. Configure vCenter VM

    Select VMs from the selection list after integrating vCenter and discovering the VMs, or manually add the VM hostname.

  8. After identifying the VM to monitor, click Append.
  9. On the VMs page of the Create vCenter dialog, click Save.

Using a vCenter Instance as the Traffic Source in a DMF Policy

To identify a vCenter instance integrated with the DANZ Monitoring Fabric (DMF) Controller as the traffic source for a DMF policy, click the VMware vCenter tab on the Integration page. Locate the vCenter instance name.
Figure 11. VMware vCenter Name

Proceed to the Monitoring > Policies page.

Figure 12. DMF Policies
Click the + Create Policy button to add a policy.
Figure 13. Create Policy
Enter a Name and Description for the vCenter policy. From the Traffic Sources column, select + Add Port(s).
Figure 14. Traffic Sources - Add Ports
Click vCenters.
Figure 15. vCenters
The available vCenter instances are displayed. Select the required vCenter instance, which then appears in the Selected Traffic Sources panel.
Figure 16. vCenter Instance
Click Add 1 Source. The vCenter instance appears in the Traffic Sources column.
Figure 17. vCenter Traffic Sources
From the Destination Tools column, select + Add Port(s). Select the interface under Destination Tools.
Figure 18. Destination Tools - Add Ports
Click the Add 1 Interface button. The interface appears under the Destination Tools column.
Figure 19. Add Interface
Click Create Policy. The new vCenter policy appears in the DMF Policies dashboard.
Figure 20. Create vCenter Policy

Using the CLI to Integrate a vCenter Instance

Refer to the following topics to monitor VMs using Encapsulated Remote SPAN (ERSPAN) or Switch Port Analyzer (SPAN) on a locally connected vCenter instance and VMs on a second locally connected vCenter instance.

VMs using ERSPAN on a Locally Connected vCenter Instance

To configure the DANZ Monitoring Fabric Controller for monitoring VMs using ERSPAN on a locally connected vCenter instance, perform the following steps:

  1. Add the vCenter instance details by entering the following commands.
    controller-1(config)# vcenter vc-1
    controller-1(config-vcenter)# host-name 10.8.23.70
    controller-1(config-vcenter)# password 094e470e2a121e060804
    controller-1(config-vcenter)# user-name root
  2. Specify the mirror type by entering the following commands.
    controller-1(config-vcenter)# mirror-type erspan
    controller-1(config-vcenter)# sampling-rate 60
    controller-1(config-vcenter)# mirrored-packet-length 60

    The sampling-rate and mirrored-packet-length commands are optional.

  3. ERSPAN mirroring requires a tunnel endpoint configuration. Use the cluster command to specify a common tunnel endpoint for all the ESXi hosts in the cluster. Use the default-tunnel-endpoint command to specify a common tunnel endpoint for all the ESXi hosts regardless of the cluster. When using both the cluster and default-tunnel-endpoint commands, all hosts in clusters form tunnels using the cluster-specific configuration, and all the other hosts not a part of any cluster use the default configuration to form tunnels.
    controller-1(config-vcenter)# default-tunnel-endpoint VCEP1
    controller-1(config-vcenter)# cluster <cluster-name> tunnel-endpoint <tunnel-endpoint-name>

    Using the tab auto-complete feature with the cluster command suggests existing cluster names associated with the vCenter.

  4. Add a static route to the default or cluster tunnel endpoint in each ESXi host.
    esxcli network ip route ipv4 add -n <network> -g <gateway> 
    Example: esxcli network ip route ipv4 add -n 192.168.200.0/24 -g 192.168.150.1 
  5. Add the VMs to monitor by entering the following commands.
    controller-1(config-vcenter)# vm-monitoring
    controller-1(config-vcenter-vm-monitoring)# vm vm-2001
    controller-1(config-vcenter-vm-monitoring)# vm vm-2002
  6. Receive-only GRE tunnel interfaces are auto-configured under the switch for all the hosts belonging to vc-1 that have a route to the default or cluster tunnel endpoint.
    ! switch
    switch DMF-RU34
    mac 94:8e:d3:fd:6b:96
    !
    gre-tunnel-interface vcenter-abd08a18
    direction receive-only
    local-ip 192.168.200.254 mask 255.255.255.0 gateway-ip 192.168.200.1
    origination vc-1--interface
    parent-interface ethernet55
    remote-ip 192.168.150.27
    gre-key-decap 33554432
    !
    gre-tunnel-interface vcenter-abd08a37
    direction receive-only
    local-ip 192.168.200.254 mask 255.255.255.0 gateway-ip 192.168.200.1
    origination vc-1--interface
    parent-interface ethernet55
    remote-ip 192.168.150.28
    gre-key-decap 33554432
    !
    gre-tunnel-interface vcenter-abd08a56
    direction receive-only
    local-ip 192.168.200.254 mask 255.255.255.0 gateway-ip 192.168.200.1
    origination vc-1--interface
    parent-interface ethernet55
    remote-ip 192.168.50.29
    gre-key-decap 33554432
  7. Enter the show running-config vcenter command to view the vCenter configuration.
    controller-1# show running-config vcenter
    ! vcenter
    vcenter vc-1
    hashed-password 752a3a3211040e0200090409090611
    host-name 10.8.23.70
    mirror-type erspan
    mirrored-packet-length 60
    sampling-rate 60
    user-name root
    !
    vm-monitoring
    vm vm-2001
    vm vm-2002
  8. Configure the policies specifying the match rules and delivery interfaces.
    controller-1(config)# policy dmf-policy-with-vcenter
    controller-1(config-policy)# action forward
    controller-1(config-policy)# filter-vcenter vc-1
    controller-1(config-policy)# 1 match any
    controller-1(config-policy)# delivery-interface TOOL-PORT-03
  9. Enter the show running-config policy command to view the automatically assigned filter interfaces.
    controller-1# show running-config policy dmf-policy-with-vcenter
    ! policy
    policy dmf-policy-with-vcenter
    action forward
    delivery-interface TOOL-PORT-03
    filter-interface DMF-RU34-filter-vcenter-abd08a18 vc-1--interface
    filter-interface DMF-RU34-filter-vcenter-abd08a37 vc-1--interface
    filter-interface DMF-RU34-filter-vcenter-abd08a56 vc-1--interface
    filter-vcenter vc-1
    1 match any
    All the host tunnels belonging to vc-1 will become the filter interfaces. If new hosts are added, deleted, or modified, policies will be recomputed with the new interfaces.
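The recompute behavior described above amounts to a set difference between the filter interfaces currently in the policy and the tunnels discovered for the vCenter's hosts. A conceptual sketch only (the function is not a DMF API; the vcenter-abd08a99 name is a hypothetical new tunnel):

```python
def recompute_filter_interfaces(configured, discovered):
    """Return (to_add, to_remove) so the policy's filter interfaces
    match the currently discovered host tunnels."""
    configured, discovered = set(configured), set(discovered)
    return sorted(discovered - configured), sorted(configured - discovered)

# One host tunnel appears and another disappears:
to_add, to_remove = recompute_filter_interfaces(
    ["vcenter-abd08a18", "vcenter-abd08a37", "vcenter-abd08a56"],
    ["vcenter-abd08a18", "vcenter-abd08a37", "vcenter-abd08a99"],
)
# to_add == ["vcenter-abd08a99"], to_remove == ["vcenter-abd08a56"]
```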

VMs using SPAN on a Locally Connected vCenter Instance

To configure the DANZ Monitoring Fabric Controller for monitoring VMs using SPAN on a locally connected vCenter instance, perform the following steps:
  1. Add the vCenter instance details by entering the following commands.
    controller-1(config)# vcenter vc-1
    controller-1(config-vcenter)# host-name 10.8.23.70 
    controller-1(config-vcenter)# password 094e470e2a121e060804
    controller-1(config-vcenter)# user-name root
  2. Specify the mirror type by entering the following commands.
    controller-1(config-vcenter)# mirror-type span
    controller-1(config-vcenter)# sampling-rate 60 
    controller-1(config-vcenter)# mirrored-packet-length 60
    The sampling-rate and mirrored-packet-length commands are optional.
  3. Add the VMs to monitor by entering the following commands.
    controller-1(config-vcenter)# vm-monitoring 
    controller-1(config-vcenter-vm-monitoring)# vm vm-2001 
    controller-1(config-vcenter-vm-monitoring)# vm vm-2002
  4. To view the vCenter configuration, enter the show running-config vcenter command as in the following example.
    controller-1# show running-config vcenter 
    ! vcenter
    vcenter vc-1
    hashed-password 752a3a3211040e0200090409090611
    host-name 10.8.23.70
    mirror-type span
    mirrored-packet-length 60
    sampling-rate 60
    user-name root
    !
    vm-monitoring
    vm vm-2001
    vm vm-2002
  5. Configure the policies specifying the match rules and delivery interfaces.
    controller-1(config)# policy dmf-policy-with-vcenter
    controller-1(config-policy)# action forward
    controller-1(config-policy)# filter-vcenter vc-1
    controller-1(config-policy)# 1 match any
    controller-1(config-policy)# delivery-interface TOOL-PORT-03
    Note: LLDP automatically learns the filter interfaces. All the hosts belonging to vc-1 that have physical connections to DMF switches become the filter interfaces. If new connections are made later (or existing connections are changed), policies will be recomputed with the new interfaces.
  6. To view the automatically assigned filter interfaces, enter the show running-config policy command.
    controller-1# show running-config policy dmf-policy-with-vcenter
    ! policy
    policy dmf-policy-with-vcenter
    action forward
    delivery-interface TOOL-PORT-03
    filter-interface vc-filter-1 origination vc-10-9-19-7--filter-interface
    filter-interface vc-filter-3 origination vc-10-9-19-7--filter-interface
    filter-vcenter vc-1
    1 match any

VMs on a Second Locally Connected vCenter Instance

To configure the DMF Controller for monitoring VMs on a second locally connected vCenter instance, perform the following steps:
  1. Add the VMs to monitor and configure the DMF policies to specify the match rules and delivery interfaces.
    (config)# vcenter vc-2
    (config-vcenter)# host-name 10.8.23.71
    (config-vcenter)# password 094e470e2a121e060804
    (config-vcenter)# user-name root
    (config-vcenter)# mirror-type span | erspan
    (config-vcenter)# sampling-rate 60
    (config-vcenter)# mirrored-packet-length 60
    (config-vcenter)# vm-monitoring
    (config-vcenter-vm-monitor)# vm vm-1001
    (config-vcenter-vm-monitor)# vm vm-1002
  2. Configure the policy for the second vCenter instance.
    (config)# policy dmf-policy-with-vcenter-2
    (config-policy)# vcenter vc-2
    (config-policy)# 1 match any
    (config-policy)# delivery-interface TOOL-PORT-02

Using the GUI to View vCenter Configuration

After integrating a vCenter instance, click the link in the Name column in the vCenter table to view vCenter activity.
Figure 21. VMware vCenter Instance Name

DANZ Monitoring Fabric (DMF) displays the vCenter Info page.

Figure 22. VMware vCenter Configuration
The Info page displays information about the configuration of the vCenter instance. To view information about vCenter resources, scroll down to the following sections:
  • Hosts
  • Virtual Switches
  • Physical Connections
  • Virtual Machines
  • Network Host Connection Details
Figure 23. Hosts, Virtual Switches, and Physical Connections
Figure 24. Virtual Machines and Network Host Connection Details

Using the CLI to View vCenter Configuration

To view the vCenter configuration in the CLI, use the show vcenter command, as in the following examples:
controller-1# show vcenter
#  vCenter Name vCenter Host Name or IP Last vCenter Update Time       Detail State                 vSphere Version
--|------------|-----------------------|------------------------------|----------------------------|---------------|
1  vc-10-9-0-75 10.9.0.75               2017-09-09 18:02:35.980000 PDT Connected and authenticated. 6.5.0
2  vc-10-9-0-76 10.9.0.76               2017-09-09 18:02:36.488000 PDT Connected and authenticated. 6.5.0
3  vc-10-9-0-77 10.9.0.77               2017-09-09 18:02:35.908000 PDT Connected and authenticated. 6.0.0
4  vc-10-9-0-78 10.9.0.78               2017-09-09 18:02:33.507000 PDT Connected and authenticated. 6.5.0
5  vc-10-9-0-79 10.9.0.79               2017-09-09 18:02:32.248000 PDT Connected and authenticated. 6.5.0
6  vc-10-9-0-80 10.9.0.80               2017-09-09 18:02:32.625000 PDT Connected and authenticated. 6.0.0
7  vc-10-9-0-81 10.9.0.81               2017-09-09 18:02:34.672000 PDT Connected and authenticated. 6.0.0
8  vc-10-9-0-82 10.9.0.82               2017-09-09 18:02:33.008000 PDT Connected and authenticated. 6.0.0
9  vc-10-9-0-83 10.9.0.83               2017-09-09 18:02:30.011000 PDT Connected and authenticated. 6.0.0
10 vc-10-9-0-84 10.9.0.84               2017-09-09 18:02:33.024000 PDT Connected and authenticated. 6.5.0
11 vc-10-9-0-85 10.9.0.85               2017-09-09 18:02:34.827000 PDT Connected and authenticated. 6.0.0
12 vc-10-9-0-86 10.9.0.86               2017-09-09 18:02:35.164000 PDT Connected and authenticated. 6.0.0
13 vc-10-9-0-87 10.9.0.87               2017-09-09 18:02:38.042000 PDT Connected and authenticated. 6.5.0
14 vc-10-9-0-88 10.9.0.88               2017-09-09 18:02:37.212000 PDT Connected and authenticated. 6.0.0
15 vc-10-9-0-89 10.9.0.89               2017-09-09 18:02:33.436000 PDT Connected and authenticated. 6.5.0
controller-1#

controller-1# show vcenter vc-10-9-0-75
#  vCenter Name vCenter Host Name or IP Last vCenter Update Time       Detail State                 vSphere Version
--|------------|-----------------------|------------------------------|----------------------------|---------------|
1  vc-10-9-0-75 10.9.0.75               2017-09-09 18:02:44.698000 PDT Connected and authenticated. 6.5.0
controller-1#

controller-1# show vcenter vc-10-9-0-75 detail
vCenter Name : vc-10-9-0-75
vCenter Host Name or IP : 10.9.0.75
Last vCenter Update Time : 2017-09-09 18:02:49.463000 PDT
Detail State : Connected and authenticated.
vSphere Version : 6.5.0
controller-1#

controller-1# show vcenter vc-10-9-0-75 error
vCenter Name : vc-10-9-0-75
vCenter Host Name or IP : 10.9.0.75
State : connected
Detail State : Connected and authenticated.
Detailed Error Info :
controller-1#

Integrating vCenter with DMF using Mirror Stack

DANZ Monitoring Fabric (DMF) vCenter integration supports mirroring from vCenter hosts using the default TCP/IP stack. However, this can result in traffic drops and affect production traffic because mirror traffic competes with production traffic. DMF vCenter integration with Mirror Stack instead uses the mirror TCP/IP stack for mirror sessions. The mirror stack on the ESXi host decouples mirror traffic from production traffic, keeping production traffic unaffected.

New vCenter configurations in DMF use the mirror stack by default; however, when upgrading from previous DMF versions, previously configured vCenters continue to use the default TCP/IP stack.

Platform Compatibility

vCenter integration with Mirror Stack requires an extra NIC on the ESXi host with the following versions:
  • vCenter Server 7.0.x
  • vCenter Server 8.0.x

vCenter Configuration

DMF vCenter integration with Mirror Stack requires a mirror stack configuration on the ESXi host and vCenter.

Perform the following steps to configure the mirror stack on vCenter.

Repeat the steps for each ESXi host containing VMs to be monitored.

  1. Enable the mirror stack in the ESXi host if not already enabled.
    1. Use the esxcli network ip netstack list command to review the current network stacks.
      [root@ESX33:~] esxcli network ip netstack list
      defaultTcpipStack
       Key: defaultTcpipStack
       Name: defaultTcpipStack
       State: 4660
      
      mirror
       Key: mirror
       Name: mirror
       State: 4660
      To view the TCP/IP configuration from vCenter UI, navigate to Host > Configure > TCP/IP.
      Figure 25. TCP/IP Configuration
    2. If the mirror stack is not configured, use the esxcli network ip netstack set -N mirror command to enable it.
      Note: The mirror setting is required to enable the Mirror TCP/IP stack and DMF integration.
  2. From vCenter create a VMkernel adapter with the mirror stack.
    Figure 26. VMKernel Network Adapter

    Select the appropriate network using the Browse option.

    Figure 27. Browse
    Click Next, and under Port properties, choose mirror.
    Figure 28. Port Properties - Mirror
    Figure 29. Mirror Stack Added

    Add the IPv4 address and the Default gateway address according to your local network requirements.

    Figure 30. IP Address and Gateway Address
    Click Next.
    Figure 31. VMkernel Adapters
  3. Based on the networking requirements, configure the default gateway of the mirror stack in the host's TCP/IP configuration or a static route entry in the ESXi host to the DMF tunnel endpoint. The following example illustrates adding a static route entry to the DMF tunnel endpoint.
    [root@ESX33:~] esxcli network ip route ipv4 add -n 192.168.200.0/24 -g 192.168.150.1 -N mirror
    
    [root@ESX31:~] esxcli network ip route ipv4 list -N mirror
    Network        Netmask        Gateway        Interface  Source
    -------------  -------------  -------------  ---------  ------
    192.168.150.0  255.255.255.0  0.0.0.0        vmk2       MANUAL
    192.168.200.0  255.255.255.0  192.168.150.1  vmk2       MANUAL
  4. Navigate to Configure > TCP/IP Configuration > Select mirror stack > IPv4 Routing Table to view the routes.
    Figure 32. TCP/IP Configuration & IPv4 Routing Table
    Figure 33. Virtual Switch
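The mirror stack selects the route for tunnel traffic by longest-prefix match against its routing table. The selection in the routing table above can be sketched as follows (illustrative only; the routes are the two entries shown in step 3):

```python
import ipaddress

routes = [  # (network, gateway) pairs from the mirror stack routing table
    ("192.168.150.0/24", "0.0.0.0"),        # directly connected
    ("192.168.200.0/24", "192.168.150.1"),  # static route toward the DMF tunnel endpoint
]

def lookup(dest: str):
    """Return the gateway of the most specific route covering dest, or None."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), gw) for net, gw in routes
               if dest_ip in ipaddress.ip_network(net)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

lookup("192.168.200.254")  # "192.168.150.1": tunnel traffic uses the static route
```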

Configuring DMF

Using the CLI

From the DMF Controller, configure the TCP/IP stack using the tcp-ip-stack option in the vCenter config. The default and recommended value is mirror-stack.
dmf-controller-1(conf)# vcenter vc8
dmf-controller-1(config-vcenter)# tcp-ip-stack
default-stack mirror-stack
dmf-controller-1(config-vcenter)# tcp-ip-stack mirror-stack

Using the GUI

To configure the TCP/IP Stack, navigate to Integration > VMware vCenter. While adding or editing a vCenter configuration, select the appropriate option under TCP/IP Stack: Default Stack or Mirror Stack.

Figure 34. Create vCenter TCP/IP Stack
Attention: Encapsulated Remote mirroring with Default Stack is not recommended. Use Mirror Stack for optimal performance.

Show Commands

Use the show running-config command to view the tcp-ip-stack configuration.
Note: If mirror-stack is configured, it only appears when using the details token.
dmf-controller-1(config-vcenter)# show running-config vcenter vc8 details

! vcenter
vcenter vc8
default-tunnel-endpoint r34-lag-leaf1b
hashed-password <hashed-password>
host-name <ip-address>
mirror-type encapsulated-remote
tcp-ip-stack mirror-stack
user-name <user-name>
View the existing mirror stack NICs and IPs of the host using the show vcenter <vCenter name> inventory command.
Note: vc8 is an example vCenter name.
dmf-controller-1# show vcenter vc8 inventory
# vCenter ESXi Host     Host DNS Name                       Cluster         Product Name                     Hardware Model CPU Usage (%) Memory Usage (%) Virtual switches Mirror Stack VMkernel Adapter VMkernel Adapter IP Address
-|-------|-------------|-----------------------------------|---------------|--------------------------------|--------------|-------------|----------------|----------------|-----------------------------|---------------------------|
1 vc8     10.240.166.27 ESX27.qa.bsn.sjc.aristanetworks.com BSN-NSX-1       VMware ESXi 8.0.2 build-22380479 PowerEdge R430 2             15               3                vmk1                          192.168.60.27
2 vc8     10.240.166.28 ESX28.qa.bsn.sjc.aristanetworks.com BSN-NSX-2       VMware ESXi 8.0.2 build-22380479                0             44
3 vc8     10.240.166.29 ESX29.qa.bsn.sjc.aristanetworks.com Edge            VMware ESXi 8.0.0 build-20513097 PowerEdge R430 4             23               3
4 vc8     10.240.166.33 ESX33.qa.bsn.sjc.aristanetworks.com vc8-mixed-stack VMware ESXi 8.0.2 build-22380479                0             6                3                vmk1                          192.168.60.33
5 vc8     10.240.166.35 ESX35.qa.bsn.sjc.aristanetworks.com MGMT            VMware ESXi 7.0.2 build-17867351 PowerEdge R430 26            23               2
6 vc8     10.240.166.38 ESX38.qa.bsn.sjc.aristanetworks.com vc8-mixed-stack VMware ESXi 8.0.2 build-22380479                1             23               3                vmk1                          192.168.60.38
dmf-rack#

Troubleshooting

Use the show fabric errors and show fabric warnings commands to troubleshoot and verify that everything is functioning as expected.

In the following example, the error message indicates that DMF could not find a route from the ESXi host to the DMF tunnel endpoint.
dmf-controller-1# show fabric errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ vCenter related error ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# vCenter Name Error
--|------------|--------------------------------------------------------------------------------------------------------------------------------------------|
1  vc701        Unable to locate a matching route for Mirror TCP/IP stack in host ESX37.qa.bsn.sjc.aristanetworks.com for DMF endpoint 192.168.200.254

Limitations

  • A port mirroring session remains on the original distributed virtual switch (DVS) when a VM migrates between DVSs.
  • Port mirroring sessions will persist on the DVS if a VM is renamed in vCenter while being monitored by DMF.
  • DMF cannot create a port mirroring session in the DVS if a conflicting session with the same VM exists in the DVS. This is not a limitation in vCenter 7.
  • When using the mirror stack configuration in DMF, mirror sessions may still be created on the DVS for an ESXi host that does not have a mirror stack configuration, resulting in no traffic being mirrored from the VM.
  • Filter interfaces auto-generated by vCenter integration should not be deleted from the policy. If deleted manually, they are not automatically re-added.
  • DMF cannot monitor VMkernel adapters.

Wildcard Tunnels for VMware vCenter Monitoring

The current implementation of VMware vCenter integration creates one tunnel interface from every ESXi host to DMF.

Using a wildcard tunnel on DMF for VMware vCenter reduces the number of tunnels created.
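The scaling benefit can be stated as a toy model: without wildcard tunnels the tunnel count grows with the number of ESXi hosts, while a single wildcard decap tunnel stays constant. A purely illustrative sketch (hypothetical helper, not DMF code):

```python
def tunnels_needed(num_esxi_hosts, wildcard):
    """One tunnel per ESXi host without wildcards; a single wildcard
    tunnel accepts the traffic from all hosts."""
    return 1 if wildcard else num_esxi_hosts

print(tunnels_needed(6, wildcard=False))  # 6
print(tunnels_needed(6, wildcard=True))   # 1
```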

Platform Compatibility

This feature is only compatible with switches that support wildcard tunneling.

Configuration

Configure wildcard tunnels using the CLI or the GUI.

Using the CLI to Create Wildcard Tunnels

The CLI construct wildcard-tunnels is available as a configuration option when configuring a VMware vCenter in DANZ Monitoring Fabric (DMF), as shown below:

Table 1. Commands
cluster                  Configure tunnel-endpoint for cluster
default-tunnel-endpoint  Configure tunnel endpoints
description              Describe this vCenter
hashed-password          Set the vCenter password (to log into vCenter)
host-name                Set the vCenter hostname
mirror-type              Set the vCenter vm monitoring mode
mirrored-packet-length   Set the mirrored packet length
password                 Set the vCenter password (to log into vCenter)
sampling-rate            Set the packet sampling rate
user-name                Set the vCenter user name (to log into vCenter)
vm-monitoring            Enter vm-monitoring config submode
wildcard-tunnels         Enable wildcard tunnels

Enable wildcard tunnels by setting the wildcard-tunnels parameter, as shown in the following example of vCenter configuration on the Controller node.

dmf-controller-1(config)# vcenter VC1
dmf-controller-1(config-vcenter)# wildcard-tunnels 
dmf-controller-1(config-vcenter)# show this
! vcenter
vcenter VC1
wildcard-tunnels
dmf-controller-1(config-vcenter)# 

Similarly, disable wildcard tunnels by issuing the no command as shown below:

dmf-controller-1(config-vcenter)# show this
! vcenter
vcenter VC1
wildcard-tunnels
dmf-controller-1(config-vcenter)# no wildcard-tunnels 
dmf-controller-1(config-vcenter)# show this
! vcenter
vcenter VC1
dmf-controller-1(config-vcenter)#

Show Commands

There is no dedicated show command for wildcard tunnels; however, the setting appears in the vCenter running config. In addition, the show tunnels command shows the tunnels created for the selected vCenter configuration with a wildcard remote IP address.

Troubleshooting

Verify errors and warnings are clear using the show fabric errors and show fabric warnings commands. The show tunnels command displays tunnels created based on the vCenter configuration on the Controller with a wildcard remote IP address. Use the show switch <name> table gre-tunnel command to display tunnels programmed on the switch.

Using the GUI to Create Wildcard Tunnels

Use the DANZ Monitoring Fabric (DMF) GUI to create wildcard tunnels as outlined below.

Navigate to the Integration > VMware vCenter page.
Figure 35. VMware vCenter Add/Edit

Click the Menu icon.

As part of the Options step of the Add/Edit vCenter workflow, enable wildcard tunnels using the Create Wildcard Tunnels toggle input. By default, the feature is disabled.
Figure 36. VMware vCenter Create vCenter Options

Limitations

Only select Broadcom® switch ASICs support wildcard tunnels; ensure your switch model supports this feature before configuring it for vCenter.

Please refer to the Platform Compatibility section for more information.

Minimum Permissions for Non-admin Users

For a non-admin user to add, remove, edit, or monitor a vCenter via the DANZ Monitoring Fabric (DMF), the user must be assigned the VSPAN operation privilege. To assign VSPAN operation privileges to a user, perform the following steps:

  1. From the vCenter GUI, navigate to Menu > Administration.
  2. Once on the page, click on the Users and Groups link in the navigation bar on the left.
    Figure 37. Users and Groups
  3. Click on the Users tab and ensure the appropriate domain is selected (in this case, the domain is vsphere.local).
    Figure 38. Domain Selection
  4. Next, click the ADD USER link and create the desired user. (In the example below, a user called dmf-alice is created.)
    Figure 39. Add a New User
  5. Verify that the newly created user is on the Users and Groups page.
    Figure 40. Verify User Created
  6. After creating the desired user, create and assign a role to this user. Click on Roles under Access Control in the navigation bar on the left. Next, click on the + sign to add a new role.
    Figure 41. Add a New Role
  7. In the New Role pop-up dialog, select Distributed Switch on the left, then scroll down to find and select VSPAN operation as the role. Click Next and give the new role a name. (In the example below, the new role monitor-dmf is created.) Click Finish to create the new role.
    Figure 42. Select Role Type
    Figure 43. Save New Role
  8. Verify the creation of the new role on the Roles page.
    Figure 44. Verify New Role Created
  9. To assign the new role to the new user, click the Global Permissions link in the navigation bar on the left. Next, click on the + sign to assign the new role.
    Figure 45. Global Permissions
  10. In the Add Permission dialog, type the newly created username and select the newly created role, as shown in the figure below.
    Note: Remember to select the Propagate to children checkbox.
    Figure 46. Assign Role to User
  11. Verify that the newly created role is assigned to the newly created user.
    Figure 47. Verify Role Assignment to User

Monitor VMware vCenter Traffic by VM Names

DMF policies can match VMware vCenter-specific information. Specifically, this feature matches traffic using VMware vCenter Virtual Machine (VM) names and requires DANZ Monitoring Fabric (DMF) vCenter integration.

Using the CLI to Monitor vCenter Traffic by VM Names

Configuration

This feature works with vCenter integration; therefore, configure vCenter Integration in DANZ Monitoring Fabric (DMF). Configure vCenter mapping in the policy, then define a policy match using VM names in the vCenter as illustrated in the following configuration example:
dmf-controller-1(config)# policy v1
dmf-controller-1(config-policy)# action forward
dmf-controller-1(config-policy)# filter-interface filter-interface
dmf-controller-1(config-policy)# delivery-interface delivery-interface
dmf-controller-1(config-policy)# filter-vcenter vcenter-name
dmf-controller-1(config-policy)# 1 match ip src-vm-name vm-name dst-vm-name vm-name
dmf-controller-1(config-policy)# 2 match ip6 src-vm-name vm-name
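Before a VM-name match can be installed, the vCenter integration must resolve each name to its interface IP addresses. A minimal sketch of that resolution step (the inventory dict and helper are hypothetical illustrations, not DMF APIs):

```python
# Hypothetical vCenter inventory: VM name -> list of interface IPs
inventory = {
    "DMF-RADIUS-SERVER-1": ["1.1.11.216"],
    "DMF-TACACS-SERVER-1": ["1.1.12.216"],
}

def expand_match(src_vm, dst_vm):
    """Expand a src/dst VM-name match into one IP pair per address
    combination; a VM with multiple vNICs yields multiple entries
    (and therefore multiple TCAM entries, as noted in Limitations)."""
    return [(s, d)
            for s in inventory.get(src_vm, [])
            for d in inventory.get(dst_vm, [])]

print(expand_match("DMF-RADIUS-SERVER-1", "DMF-TACACS-SERVER-1"))
# [('1.1.11.216', '1.1.12.216')]
```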

Show Commands

Enter the show running-config policy <policy name> command to display the configuration.
dmf-controller-1# show running-config policy v1

! policy
policy v1
action forward
delivery-interface delivery-interface
filter-interface filter-interface
filter-vcenter vcenter-name
1 match ip src-vm-name vm-name dst-vm-name vm-name
2 match ip6 src-vm-name vm-name
The show policy <policy name> command displays the policy information, including stats.
dmf-controller-1# show policy v2
Policy Name: v2
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 5
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Installed Time : 2023-12-21 19:00:39 UTC
Installed Duration : 50 minutes, 11 secs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Match Rules ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Rule
-|--------------------------------------------------------------------------|
1 1 match ip src-vm-name DMF-RADIUS-SERVER-1 dst-vm-name DMF-TACACS-SERVER-1

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF           Switch     IF Name    State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|----------------|----------|----------|-----|---|-------|-----|--------|--------|------------------------------|
1 span_from_arista DELL-S4048 ethernet20 up    rx  0       0     0        -        2023-12-21 19:00:39.941000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF       Switch     IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------------|----------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 ubuntu-tools DELL-S4048 ethernet50/2 up    tx  0       0     0        -        2023-12-21 19:00:39.941000 UTC
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.
The show vcenter <vcenter name> endpoint command displays the vCenter VM information, including networks.
dmf-controller-1# show vcenter vcenter1 endpoint 
# vCenter Name VM Name   ESXi Host Name Network Interface Name MAC Address                IP Address                                 Virtual Switch Portgroup Power State
--|------------|---------|--------------|----------------------|--------------------------|------------------------------------------|--------------|-------------|-----------|
1 vcenter1     ub-11-216 10.240.155.216 Network adapter 1      00:50:56:8b:4d:03 (VMware) 1.1.11.216/24, fe80::250:56ff:fe8b:4d03/64 DVS-DMF        vlan11    powered-on
2 vcenter1     ub-12-216 10.240.155.216 Network adapter 1      00:50:56:8b:72:a0 (VMware) 1.1.12.216/24, fe80::250:56ff:fe8b:72a0/64 DVS-DMF        vlan12    powered-on
3 vcenter1     ub-13-216 10.240.155.216 Network adapter 1      00:50:56:8b:c0:06 (VMware) 1.1.13.216/24, fe80::250:56ff:fe8b:c006/64 DVS-DMF        vlan-10   powered-on
4 vcenter1     ub-14-216 10.240.155.216 Network adapter 1      00:50:56:8b:d1:d9 (VMware) 1.1.14.216/24, fe80::250:56ff:fe8b:d1d9/64 DVS-DMF        vlan-10   powered-on

Using the GUI to Monitor vCenter Traffic by VM Names

Configure vCenter VM name matches under the DANZ Monitoring Fabric (DMF) policies match rules section. For example:
  1. In the DMF GUI, navigate to the Monitoring > Policies page.
    Figure 48. DMF Policies
  2. Click Create Policy to create a new policy or edit an existing one by selecting a row from the Policies Table and clicking Edit.
    Figure 49. Create / Edit Policy
  3. Navigate to the Match Traffic tab.
    Figure 50. Match Traffic
  4. Click Configure a Rule to configure a custom match rule.
    Figure 51. Configure a Rule
  5. Set the EtherType to IPv4 or IPv6.
  6. Add the Source IP Address as the vCenter VM name. Select the Virtual Machine option from the Source IP Address drop-down and select a virtual machine from the VM Name drop-down.
    Figure 52. Source IP Address VM Name
  7. Add the Destination IP address as the vCenter VM name. Select the Virtual Machine option from the Destination IP Address drop-down and select a virtual machine from the VM Name drop-down.
    Figure 53. Destination IP VM
    Note: If the VM Name drop-down shows No Data, ensure only one vCenter is affiliated with the policy (under Traffic Sources).
  8. Click Add Rule to add the match rule to the policy.
  9. After entering other inputs as required, click Create Policy (or Save Policy) to save the configuration.

Troubleshooting

Fabric errors and warnings are very useful for troubleshooting this feature.

When using the show fabric warnings command, the following validation message displays when the vCenter integration cannot resolve the IP address for the VM name used in the policy.
dmf-controller-1# show fabric warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Policy related warning ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Policy Name Warning
-|-----------|------------------------------------------------------------------------------------------------------------|
1 v1          No IP found for VMs [ub-15-216, ub-216-multinic, ub-217-vlan10, ub-14-216, ub-11-216] associated with policy

When a policy matches on VM names but no vCenter instance is associated with the policy, the following validation message appears.

dmf-controller-1# show fabric warnings 
~~~~~~~~~~~~~~~~~~~ Policy related warning ~~~~~~~~~~~~~~~~~~~
# Policy Name Warning 
-|-----------|-----------------------------------------------|
1 v1          No vCenter associated to policy with VM matches

Limitations

  • This feature works only with vCenter integration and a direct Switch Port Analyzer (SPAN) session from a switch carrying ESXi traffic.
  • Only the IP addresses of VM interfaces connected to a distributed virtual switch (DVS) are added to policy matches.
  • The system may use extra TCAM entries if the management network uses a DVS.
  • VMkernel names cannot be matched in the policy.
  • When a VM name with multiple vNICs (multiple IP addresses) matches the policy, a TCAM entry is added for each IP address.
  • VM names cannot be matched with the MAC option in the policy.
  • If the vCenter becomes disconnected, policies associated with VM names may not match traffic correctly.

Tunneling Between Data Centers

This chapter describes how to establish Generic Routing Encapsulation (GRE) or Virtual Extensible LAN (VXLAN) tunnels between DMF switches in different locations, or between a DMF switch and a third-party device.

Understanding Tunneling

DMF can forward traffic between two DMF switches controlled by the same Controller over a tunnel. Use this feature to extend a DMF deployment across multiple data centers or branch offices connected by Layer-3 networks. This feature supports the centralization or distribution of tools and taps across multiple locations when they cannot be cabled directly.
Note: Refer to the DANZ Monitoring Fabric 8.6 Hardware Compatibility List for a list of the switches that support tunneling. The DANZ Monitoring Fabric 8.6 Verified Scale Guide indicates the number of tunnels supported by each supported switch (Verified Scalability Values/Encap Tunnels/Decap Tunnels).
When enabling tunneling between DMF switches, keep the following in mind:
  • Connect switch ports in the main data center and the remote location to the appropriate WAN routers and ping each interface to ensure IP connectivity is established.
  • Create tunnel endpoints and configure the tunnel attributes on each end of the tunnel.
  • The CRC Check option, which is enabled by default, must remain enabled when tunneling is used. If CRC checking has been disabled, re-enable it before configuring a tunnel.
  • In the case of GRE tunnels, the optional gre-key-decap value on the receiving end must match the GRE key value of the sender. The option exists to set multiple values on the same tunnel to decapsulate traffic with different keys.
  • A single switch can initiate multiple tunnels. Configure a separate encap-loopback-interface for each tunnel (transmit-only or bidirectional).
  • Set the loopback-mode to mac on the encap-loopback-interface.
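The GRE key requirement above reduces to a membership check on the receive side. The sketch below is hypothetical logic, not switch firmware; in particular, the handling of unkeyed packets is an assumption:

```python
def accept_gre_packet(packet_key, decap_keys):
    """Decapsulate only when the sender's GRE key matches one of the
    configured gre-key-decap values; multiple keys may be configured
    on the same tunnel to decapsulate traffic with different keys."""
    if not decap_keys:               # assumption: with no keys configured,
        return packet_key is None    # only unkeyed traffic is decapsulated
    return packet_key in decap_keys

print(accept_gre_packet(4097, {4097}))        # True: sender key matches
print(accept_gre_packet(4098, {4097}))        # False: key mismatch, dropped
print(accept_gre_packet(4097, {4096, 4097}))  # True: multiple decap keys
```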
Figure 1. Connecting DMF Switches Using a Layer-2 GRE Tunnel
Note: For EOS switches running DMF 8.5, L2GRE tunneling is supported only on Arista 7280R3 switches and is subject to the following limitations:
  • L2GRE tunnels are not supported on DMF 7280R and 7280R2 switches.
  • DSCP configuration is not supported.
  • Traffic steering for traffic arriving on an L2GRE tunnel will only allow for matching based on inner src/dst IP, IP protocol, and inner L4 src/dst port.
  • Packets may only be redirected to a single L2GRE tunnel.
  • Packets may not be load-balanced across multiple L2GRE tunnels.
  • Only IPv4 underlays in the default VRF are supported.
  • Matching on inner IPv6 headers may not be supported.
  • The maximum number of tunnels on EOS Jericho switches is 32.
  • There is no bi-directional tunnel support. The parent/uplink router-facing interface is used for either encapsulation or decapsulation, but not simultaneously.
  • When using tunnel-as-a-filter, there is no inner L3/L4 matching support immediately after decapsulation in the same switch pass. Using a loopback may work around this limitation.
  • VXLAN tunnels are currently NOT supported on 7280 switches.

Encapsulation Type

DANZ Monitoring Fabric (DMF) supports the VXLAN tunnel type and Layer-2 Generic Routing Encapsulation (L2GRE). The tunnel type is a per-switch configuration that sets the switch pipeline to VXLAN or L2GRE. Once the switch pipeline is set, all tunnels configured on the switch use the same tunnel type.

The encapsulation type can be configured in the GUI while adding a new switch into the DMF Controller, as shown in the figure below:

The encapsulation type can be edited for an existing switch from the Fabric > Switches > Configure Switch page, as shown in the figure below:
The encapsulation type can also be configured or edited from the CLI in configuration mode:
Ctrl-1(config)# switch Switch-1
Ctrl-1(config-switch)# tunnel-type
gre Select GRE as the tunnel type of the switch. (default selection)
vxlan Select VxLAN as the tunnel type of the switch.
The switch pipeline mode can be viewed from the CLI using the following command:
Ctrl-1(config)# show switch
# Switch Name IP Address                  State     Pipeline Mode
-|-----------|---------------------------|---------|----------------------------------|
1 Switch-1    fe80::d6af:f7ff:fef9:e2b0%9 connected l3-l4-offset-match-push-vlan-vxlan
2 Switch-2    fe80::e6f0:4ff:fe69:6aee%9  connected l3-l4-offset-match-push-vlan
3 Switch-3    fe80::e6f0:4ff:fe78:1ffe%9  connected l3-l4-offset-match-push-vlan-vxlan
Ctrl-1(config)#

In the above CLI output, Switch-1 and Switch-3 use the VXLAN tunnel type, as seen in the Pipeline Mode column. Switch-2 is using the L2GRE tunnel type.
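The relationship between the Pipeline Mode column and the tunnel type can be expressed as a one-line check; the "-vxlan" suffix convention is inferred from the output above:

```python
def tunnel_type(pipeline_mode):
    """Infer a switch's tunnel type from its pipeline mode string:
    a '-vxlan' suffix means VXLAN, otherwise the switch uses L2GRE."""
    return "vxlan" if pipeline_mode.endswith("-vxlan") else "l2gre"

print(tunnel_type("l3-l4-offset-match-push-vlan-vxlan"))  # vxlan
print(tunnel_type("l3-l4-offset-match-push-vlan"))        # l2gre
```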

Using Tunnels in Policies

Tunnels can be used as a core link, filter interface, or delivery interface. The most common use case is linking multiple sites, using the tunnel as a core link. If used as a core link, DMF automatically discovers the link as if it were a physical link and similarly determines connectivity (link-state). If the tunnel goes down for any reason, DMF treats the failure as it would a physical link failure.

Another typical use case for the tunnel is as a filter interface to decapsulate L2 GRE/VXLAN tunneled production traffic or a tunnel initiated by another DMF instance managed by a different DMF Controller. Use the tunnel endpoint as a delivery interface to encapsulate filtered monitoring traffic to send to analysis tools or another DANZ Monitoring Fabric managed by a different DMF Controller.

Note: By default, sFlow®* and other Arista Analytics metadata cannot be generated for decapsulated L2 GRE/VXLAN tunneled production traffic on a tunnel interface configured as a filter interface. To generate this metadata, create a policy with a filter interface as a tunnel interface and send the decapsulated traffic to a MAC loopback port configured in a filter-and-delivery role. Now, create a second policy with the filter interface as the MAC loopback port and the delivery interface going to the tools. The sFlow and metadata will now be generated for the decapsulated tunnel traffic.

Using the GUI to Configure a GRE Tunnel

To configure a GRE tunnel using the GUI, perform the following steps:

  1. Select Fabric > Switch.
  2. On the Switches page, click the Menu control next to the switch or interface to include in the tunnel and select Create Tunnel.
    Alternatively, configure tunnels from the Fabric > Interfaces page by clicking on the Menu Control > Create Tunnel option. The system displays the dialog as shown in the figure below:
    Figure 2. Configure VXLAN Tunnel
  3. Complete the fields on this page as described below.
    • Switch: From the drop-down, select the DMF switch.
    • Encapsulation Type: The encapsulation type will automatically be selected based on the pipeline mode of the selected switch.
    • Name: Name of the tunnel, beginning with the word tunnel.
    • Rate Limit (Optional): Packets entering the tunnel can be rate-limited to restrict the bandwidth usage of the tunnel. This can help ensure that a WAN link is not saturated with monitoring traffic being tunneled between sites. This setting is applicable on the tunnel encapsulation side.
    • Direction: Direction can be bidirectional, transmit-only, or receive-only. For bidirectional tunnels, set the tunnel direction to bidirectional on both ends. For uni-directional tunnels from remote to main datacenter, the tunnel direction is transmit-only on the remote datacenter switch and receive-only on the main data center switch.
    • Local IP: Local IP address and subnet mask in CIDR format (/nn).
    • Gateway IP: IP address of the default (next-hop) gateway.
    • Remote IP: This is the IPv4 address of the other end (remote end) of the tunnel.
    • Parent Interface: Physical port or port-channel interface associated with the tunnel. This is the destination interface for the tunnel.
    • Loopback Interface: A physical interface on each switch with a transmit-only or a bidirectional tunnel endpoint. Use this physical interface for tunnel encapsulation and not for any other DMF purpose, such as a filter, delivery, service, or core interface.
    • DSCP (Optional): Mark the tunnel traffic with the specified DSCP value.
  4. After configuring the appropriate options, click Submit.
    Note: Configure this procedure on both switches at each end of the tunnel. Set the Auto VLAN mode to Push Per Policy or Push Per Filter Interface.
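The Local IP field carries both the address and the mask in CIDR form (/nn). How such a value decomposes can be illustrated with Python's standard ipaddress module (the sample value is a hypothetical example):

```python
import ipaddress

def parse_local_ip(cidr):
    """Split a 'Local IP' value in CIDR form (/nn) into the interface
    address and its dotted-decimal netmask."""
    iface = ipaddress.ip_interface(cidr)
    return str(iface.ip), str(iface.network.netmask)

print(parse_local_ip("192.168.100.50/24"))
# ('192.168.100.50', '255.255.255.0')
```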

Using the CLI to Configure a GRE Tunnel

To configure a GRE tunnel using the CLI, perform the following steps:

  1. Connect switch ports (on remote and main datacenter) to their respective WAN routers and ensure they can communicate via IP.
  2. Enable tunneling on the DMF network by entering the following command from config mode:
    controller-1(config)# tunneling
    Tunneling is an Arista Licensed feature. Please ensure that you have purchased the license
    for tunneling before using this feature. enter "yes" (or "y") to continue: yes
    controller-1(config)#
  3. Configure the MAC loopback mode, as shown in the following example:
    controller-1(config)# switch DMF-CORE-SWITCH
    controller-1(config-switch)# interface ethernet7
    controller-1(config-switch-if)# loopback-mode mac
  4. Create tunnel endpoints.
The following CLI example configures a bi-directional tunnel from remote-dc1-filter-sw to main-dc-delivery-sw:
!
switch remote-dc1-filter-sw
gre-tunnel-interface tunnel1
remote-ip 192.168.200.50
gre-key-decap 4097 === 4097 is the VPN key used for the tunnel ID
parent-interface ethernet6
local-ip 192.168.100.50 mask 255.255.255.0 gateway-ip 192.168.100.1
direction bidirectional encap-loopback-interface ethernet38
!
switch main-dc-delivery-sw
gre-tunnel-interface tunnel1
remote-ip 192.168.100.50
gre-key-decap 4097 === 4097 is the VPN key used for the tunnel ID
parent-interface ethernet5
local-ip 192.168.200.50 mask 255.255.255.0 gateway-ip 192.168.200.1
direction bidirectional encap-loopback-interface ethernet3
The following CLI example configures a uni-directional tunnel from remote-dc1-filter-sw to main-dc-delivery-sw:
!
switch remote-dc1-filter-sw
gre-tunnel-interface tunnel1
remote-ip 192.168.200.50
gre-key-decap 4097 === 4097 is the VPN key used for the tunnel ID
parent-interface ethernet6
local-ip 192.168.100.50 mask 255.255.255.0 gateway-ip 192.168.100.1
direction transmit-only encap-loopback-interface ethernet38
!
switch main-dc-delivery-sw
gre-tunnel-interface tunnel1
remote-ip 192.168.100.50
gre-key-decap 4097 === 4097 is the VPN key used for the tunnel ID
parent-interface ethernet5
local-ip 192.168.200.50 mask 255.255.255.0 gateway-ip 192.168.200.1
direction receive-only

Using the CLI to Rate Limit the Packets on a GRE Tunnel

Packets entering the GRE tunnel can be rate-limited to limit bandwidth usage by the tunnel and help ensure that a WAN link is not saturated with monitoring traffic being tunneled between sites. This setting is applicable on the tunnel encapsulation side.
Note: The minimum rate-limit value on a tunnel interface is 25 kbps. When attempting to set a lower value, the switch still programs the rate limit as 25 kbps.
switch DMF-CORE-SWITCH-1
gre-tunnel-interface tunnel1
direction bidirectional encap-loopback-interface ethernet10
------------------------------example truncated------------
interface ethernet10
rate-limit 1000
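The 25 kbps floor described in the note above behaves like a simple clamp; a sketch (hypothetical helper, not switch code):

```python
MIN_TUNNEL_RATE_KBPS = 25  # floor enforced by the switch per the note above

def effective_rate_limit(requested_kbps):
    """Return the rate limit actually programmed on the tunnel's encap
    interface; requests below the floor are raised to 25 kbps."""
    return max(requested_kbps, MIN_TUNNEL_RATE_KBPS)

print(effective_rate_limit(1000))  # 1000
print(effective_rate_limit(10))    # 25
```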

Using the CLI to View GRE Tunnel Interfaces

All CLI show commands for regular interfaces apply to GRE tunnel interfaces.

Use the show running-config command to view the configuration of tunnel interfaces.

Enter the show tunnel command to see a tunnel interface's configuration parameters and runtime state.
controller-1# show tunnel
# Switch DPID Tunnel Name Tunnel Status Direction Src IP Dst IP Parent Name Loopback Name
-|-------------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-1 tunnel1 ESTABLISHED_TUNNEL bidirectional 198.82.215.1 216.47.143.1 ethernet5:1 ethernet6
2 DMF-CORE-SWITCH-2 tunnel1 ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
3 DMF-CORE-SWITCH-2 tunnel2 ESTABLISHED_TUNNEL bidirectional 192.168.43.1 192.168.42.1 ethernet11:4 ethernet17
4 DMF-CORE-SWITCH-3 tunnel2 ESTABLISHED_TUNNEL bidirectional 192.168.42.1 192.168.43.1 ethernet6 ethernet33

controller-1# show tunnel switch DMF-CORE-SWITCH-2
# Switch DPID Tunnel Name Tunnel Status Direction Src IP Dst IP Parent Name Loopback Name
-|-------------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-2 tunnel1 ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
2 DMF-CORE-SWITCH-2 tunnel2 ESTABLISHED_TUNNEL bidirectional 192.168.43.1 192.168.42.1 ethernet11:4 ethernet17

controller-1# show tunnel switch DMF-CORE-SWITCH-2 tunnel1
# Switch DPID Tunnel Name Tunnel Status Direction Src IP Dst IP Parent Name Loopback Name
-|-------------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-2 tunnel1 ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
controller-1#

Using the GUI to Configure a VXLAN Tunnel

To configure a VXLAN tunnel using the GUI, perform the following steps:

  1. Select Fabric > Switch.
  2. On the Switches page, click the Menu control next to the switch or interface to include in the tunnel and select Create Tunnel.
    Alternatively, configure tunnels from the Fabric > Interfaces page by clicking on the Menu Control > Create Tunnel option. The system displays the dialog as shown in the figure below:
    Figure 3. Configure VXLAN Tunnel
  3. Complete the fields on this page as described below:
    • Switch: From the drop-down, select the DMF switch.
    • Encapsulation Type: Encapsulation type will automatically be selected based on the pipeline mode of the selected switch.
    • Name: Name of the tunnel, beginning with the word tunnel.
    • Rate Limit (Optional): Packets entering the tunnel can be rate-limited to restrict bandwidth usage of the tunnel. This can help ensure that a WAN link is not saturated with monitoring traffic being tunneled between sites. This setting is applicable on the tunnel encap side.
    • Direction: bidirectional, transmit-only, or receive-only. For bidirectional tunnels, set tunnel-direction to bidirectional on both ends. For uni-directional tunnels from remote to main datacenter, tunnel-direction is transmit only on the remote datacenter switch and the tunnel-direction is receive-only on the main data center switch.
    • Local IP: Local IP address and subnet mask in CIDR format (/nn).
    • Gateway IP: IP address of the default (next-hop) gateway.
    • Remote IP: This is the IPv4 address of the other end (remote end) of the tunnel.
    • Parent Interface: Physical port or port-channel interface associated with the tunnel. This is the destination interface for the tunnel.
    • Loopback Interface: A physical interface on each switch with a transmit-only or a bidirectional tunnel endpoint. Use this physical interface for tunnel encapsulation and not for any other DMF purpose, such as a filter, delivery, service, or core interface.
    • DSCP (Optional): Mark the tunnel traffic with the specified DSCP value.
  4. After configuring the appropriate options, click Submit.
Note: Configure this procedure on both switches at each end of the tunnel. Set the Auto VLAN mode to Push Per Policy or Push Per Filter Interface.
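The Gateway IP field above matters only when the Remote IP lies outside the local subnet; standard IP forwarding then sends the encapsulated packets to the next-hop gateway. The following sketch illustrates that decision (illustrative only, not DMF code; the function name is an assumption):

```python
import ipaddress

def tunnel_next_hop(local_cidr, gateway_ip, remote_ip):
    """Return where encapsulated tunnel packets are sent first:
    directly to the remote endpoint when it is on the local subnet,
    otherwise to the configured next-hop gateway."""
    local_net = ipaddress.ip_interface(local_cidr).network
    if ipaddress.ip_address(remote_ip) in local_net:
        return remote_ip        # remote endpoint is directly reachable
    return gateway_ip           # forward via the default (next-hop) gateway
```

With the example addresses used later in this chapter (local 192.168.100.50/24, gateway 192.168.100.1, remote 192.168.200.50), the remote endpoint is off-subnet, so traffic goes via the gateway.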

Using the CLI to Configure a VXLAN Tunnel

To configure a VXLAN tunnel using the CLI, perform the following steps:

  1. Connect switch ports (on remote and main datacenter) to their respective WAN routers and ensure that they can communicate via IP.
  2. Enable tunneling on the DMF network by entering the following command from config mode:
    controller-1(config)# tunneling
    Tunneling is an Arista Licensed feature. Please ensure that you have purchased the license
    for tunneling before using this feature. enter "yes" (or "y") to continue: yes
    controller-1(config)#
  3. Configure the MAC loopback mode, as shown in the following example:
    controller-1(config)# switch DMF-CORE-SWITCH
    controller-1(config-switch)# interface ethernet7
    controller-1(config-switch-if)# loopback-mode mac
  4. Create tunnel endpoints.
The following CLI example configures a bidirectional tunnel from remote-dc1-filter-sw to main-dc-delivery-sw:
!
switch remote-dc1-filter-sw
vxlan-tunnel-interface tunnel1
remote-ip 192.168.200.50
parent-interface ethernet6
local-ip 192.168.100.50 mask 255.255.255.0 gateway-ip 192.168.100.1
direction bidirectional encap-loopback-interface ethernet38
!
switch main-dc-delivery-sw
vxlan-tunnel-interface tunnel1
remote-ip 192.168.100.50
parent-interface ethernet5
local-ip 192.168.200.50 mask 255.255.255.0 gateway-ip 192.168.200.1
direction bidirectional encap-loopback-interface ethernet3
The following CLI example configures a uni-directional tunnel from remote-dc1-filter-sw to main-dc-delivery-sw:
!
switch remote-dc1-filter-sw
vxlan-tunnel-interface tunnel1
remote-ip 192.168.200.50
parent-interface ethernet6
local-ip 192.168.100.50 mask 255.255.255.0 gateway-ip 192.168.100.1
direction transmit-only encap-loopback-interface ethernet38
!
switch main-dc-delivery-sw
vxlan-tunnel-interface tunnel1
remote-ip 192.168.100.50
parent-interface ethernet5
local-ip 192.168.200.50 mask 255.255.255.0 gateway-ip 192.168.200.1
direction receive-only
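As the examples show, the direction settings at the two endpoints must pair correctly: bidirectional with bidirectional, and transmit-only with receive-only. A minimal Python sketch of that pairing rule (illustrative only, not DMF code):

```python
# Valid pairings of tunnel direction settings for the two tunnel endpoints,
# per the rules described above (illustrative check, not DMF source).
VALID_PAIRS = {
    ("bidirectional", "bidirectional"),
    ("transmit-only", "receive-only"),
    ("receive-only", "transmit-only"),
}

def directions_compatible(local_dir: str, remote_dir: str) -> bool:
    """Return True when the two endpoints' direction settings form a valid tunnel."""
    return (local_dir, remote_dir) in VALID_PAIRS
```

In the uni-directional example above, transmit-only on remote-dc1-filter-sw pairs with receive-only on main-dc-delivery-sw.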

Using the CLI to Rate Limit the Packets on a VXLAN Tunnel

Packets entering the VXLAN tunnel can be rate-limited to limit bandwidth usage by the tunnel and help ensure that a WAN link is not saturated with monitoring traffic being tunneled between sites. This setting is applicable on the tunnel encapsulation side.
Note: The minimum recommended value for rate limiting on the tunnel interface is 25 kbps. If a lower value is configured, the switch still sets the rate limit to 25 kbps.
switch DMF-CORE-SWITCH-1
vxlan-tunnel-interface tunnel1
direction bidirectional encap-loopback-interface ethernet10
<snip>
interface ethernet10
rate-limit 1000
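The 25 kbps floor described in the note above amounts to a simple clamp on the configured value, sketched here for illustration (not DMF source code):

```python
MIN_TUNNEL_RATE_KBPS = 25  # floor enforced by the switch, per the note above

def effective_rate_limit(requested_kbps: int) -> int:
    """Return the rate limit the switch actually applies to a tunnel interface:
    values below the minimum are raised to 25 kbps."""
    return max(requested_kbps, MIN_TUNNEL_RATE_KBPS)
```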

Using the CLI to View VXLAN Tunnel Interfaces

All CLI show commands for regular interfaces apply to tunnel interfaces.

Use the show running-config command to display the configuration of tunnel interfaces.

Enter the show tunnel command to see the configuration parameters and runtime state for a VXLAN tunnel interface.
controller-1# show tunnel
# Switch DPID        Tunnel Name Tunnel Status      Direction     Src IP       Dst IP       Parent Name  Loopback Name
-|-------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-1  tunnel1     ESTABLISHED_TUNNEL bidirectional 198.82.215.1 216.47.143.1 ethernet5:1  ethernet6
2 DMF-CORE-SWITCH-2  tunnel1     ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
3 DMF-CORE-SWITCH-2  tunnel2     ESTABLISHED_TUNNEL bidirectional 192.168.43.1 192.168.42.1 ethernet11:4 ethernet17
4 DMF-CORE-SWITCH-3  tunnel2     ESTABLISHED_TUNNEL bidirectional 192.168.42.1 192.168.43.1 ethernet6    ethernet33
controller-1#
controller-1# show tunnel switch DMF-CORE-SWITCH-2
# Switch DPID Tunnel Name Tunnel Status Direction Src IP Dst IP Parent Name Loopback Name
-|-----------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-2 tunnel1 ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
2 DMF-CORE-SWITCH-2 tunnel2 ESTABLISHED_TUNNEL bidirectional 192.168.43.1 192.168.42.1 ethernet11:4 ethernet17
controller-1#
controller-1# show tunnel switch DMF-CORE-SWITCH-2 tunnel1
# Switch DPID Tunnel Name Tunnel Status Direction Src IP Dst IP Parent Name Loopback Name
-|-----------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-2 tunnel1 ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
controller-1#

Viewing or Modifying Existing Tunnels

To view or modify the configuration of an existing tunnel, use the Fabric > Interfaces option. To view the tunnel configuration, expand the interface. DMF displays the tunnel configuration as illustrated in the following figure.
Tip: When multiple interfaces are present, locate tunnel interfaces by typing the first few letters of the word tunnel into the Filter field.
Figure 4. Tunnel Interfaces

The expanded row displays the status and other properties of the tunnel configured for the selected interface. Use the Menu control and select Configure Tunnel to modify the tunnel configuration. Select Delete Tunnel to remove the tunnel.

Using a Tunnel with User-defined Offsets

With an L2GRE or VXLAN tunnel, matching traffic on a user-defined offset can drop interesting traffic: the tunnel header shifts the offset calculation, so the selected traffic may be dropped. This behavior results from how the switch hardware calculates the anchor and offset for incoming packets. When the core link is a tunnel, the anchor and offset calculation differs between encapsulation and decapsulation.

There are two ways to work around this issue:
  • Avoid matching on user-defined offsets on tunnel interfaces
  • Avoid using a tunnel as a core link when matching on a user-defined offset

The first workaround filters on the user-defined offset before the traffic enters the tunnel. This preserves LLDP messaging on the core tunnel link but requires an extra physical loopback interface on the encapsulating switch. The figure below illustrates both workarounds. In either case, a UDF match is applied to the ingress traffic on filter interface F. For example, the policy might apply a match at offset 20 after the start of the L4 header. In both workarounds, the policy is split into two policies:

P1: F to D1, match on user-defined offset.
P2: F1 to D, match any.

In the example on the left, the ingress interface on the decapsulating switch, which is included in a core tunnel link, no longer has to calculate the user-defined offset. This solution preserves LLDP messages on the tunnel link but requires an extra loopback interface.
Figure 5. Using Tunnels with User-Defined Offsets

In the example on the right, the tunnel endpoints are configured as filter and delivery interfaces. This solution avoids using the tunnel as a core link and does not require an extra physical loopback interface. However, LLDP updates are lost on the tunnel link.

Wildcard Tunnels on SAND Platforms

The wildcard tunneling feature allows the DANZ Monitoring Fabric (DMF) to decapsulate L2GRE-based tunneled traffic from any remote source. Previously supported only on Switch Light OS (SWL) based DMF switches, wildcard tunnels are now also supported on Arista EOS-based DMF switches.

Platform Compatibility

This feature is supported on DMF 8.5.0-compatible EOS switches with Jericho2 or later ASICs that support L2GRE tunneling. On EOS SAND platforms, L2GRE tunneling is supported only on Arista 7280R3 switches.

Configuring Wildcard Tunnels Using the CLI

Perform the following steps to configure the tunnels.
  1. Enable the tunneling feature before configuring tunnels on a switch using the tunneling command. Enter yes when prompted to continue.
    dmf-controller-1(config)# tunneling
    Tunneling is an Arista Licensed feature. 
    Please ensure that you have purchased the license for tunneling before using this feature.
    Enter "yes" (or "y") to continue: y
    dmf-controller-1(config)#
  2. Configure a tunnel, setting remote-ip to 0.0.0.0 to enable the wildcard tunnel.
    dmf-controller-1(config)# switch main-dc-delivery-sw
    dmf-controller-1(config-switch)# gre-tunnel-interface tunnel1
    dmf-controller-1(config-switch)# remote-ip 0.0.0.0
    dmf-controller-1(config-switch)# gre-key-decap 4097
    dmf-controller-1(config-switch)# parent-interface ethernet5
    dmf-controller-1(config-switch)# local-ip 192.168.200.50 mask 255.255.255.0 gateway-ip 192.168.200.1
    dmf-controller-1(config-switch)# direction receive-only
Please refer to the Tunneling Between Data Centers chapter for more information on using L2GRE tunnels in DANZ Monitoring Fabric (DMF).

Show Commands

All CLI show commands for regular interfaces apply to GRE tunnel interfaces. Use the show running-config command to view the configuration of tunnel interfaces.

Enter the show tunnel command to view a tunnel interface's configuration parameters and runtime state.

Example

dmf-controller-1# show tunnel
# Switch DPID        Tunnel Name Tunnel Status      Direction     Src IP       Dst IP       Parent Name  Loopback Name
-|-------------------|-----------|------------------|-------------|------------|------------|------------|-------------|
1 DMF-CORE-SWITCH-1  tunnel1     ESTABLISHED_TUNNEL bidirectional 198.82.215.1 216.47.143.1 ethernet5:1  ethernet6
2 DMF-CORE-SWITCH-2  tunnel1     ESTABLISHED_TUNNEL bidirectional 216.47.143.1 198.82.215.1 ethernet11:3 ethernet5
3 DMF-CORE-SWITCH-2  tunnel2     ESTABLISHED_TUNNEL bidirectional 192.168.43.1 192.168.42.1 ethernet11:4 ethernet17
4 DMF-CORE-SWITCH-3  tunnel2     ESTABLISHED_TUNNEL bidirectional 192.168.42.1 192.168.43.1 ethernet6    ethernet33

Configuring Wildcard Tunnels Using the GUI

In the DANZ Monitoring Fabric (DMF) UI, enable Tunneling by navigating to the DMF Features page and clicking on the gear icon in the navigation bar.

Figure 6. DMF Navigation Menu
Locate the Tunneling card feature on the page.
Tip: Use the search bar to quickly locate the feature.
Figure 7. DMF Features

Turn on the toggle switch to enable the Tunneling feature on DMF.

Steps to Enable Wildcard Tunnels

  1. Locate the Switches tab on the Fabric > Switches page.
    Figure 8. Fabric > Switches
  2. Click the Create Tunnel option from the table menu.
    Figure 9. Create Tunnel
  3. Create a Tunnel (select Encapsulation Type as GRE) by entering the required fields. To enable wildcard support, set the Remote IP field to 0.0.0.0, as shown below.
    Figure 10. Configure Tunnel
  4. Click Submit to save the tunnel configuration.
*sFlow® is a registered trademark of Inmon Corp.

Using the DMF Recorder Node

This chapter describes configuring the DANZ Monitoring Fabric (DMF) Recorder Node (RN) to record packets from DMF filter interfaces.

Overview

The DANZ Monitoring Fabric (DMF) Recorder Node (RN) integrates with the DMF for single-pane-of-glass monitoring. A single DMF Controller can manage multiple RNs, delivering packets for recording through Out-of-Band policies. The DMF Controller also provides central APIs for packet queries across one or multiple RNs and for viewing errors, warnings, statistics, and the status of connected RNs.

A DMF out-of-band policy directs matching packets for recording to one or more RNs. An RN interface identifies the switch and port used to attach the RN to the fabric. A DMF policy treats these as delivery interfaces and adds them to the policy so that flows matching the policy are delivered to the specified RN interfaces.

Configuration Summary

At a high level, follow these three steps to use the Recorder Node (RN):

Step 1: Define the RN.

Step 2: Define a DANZ Monitoring Fabric (DMF) policy to select the traffic to forward to the RN.

Step 3: View and analyze the recorded traffic.

The RN configuration on the DMF Controller includes the following:
  • Name: Each RN requires a unique name among recorder nodes in the connected fabric. Removing the name removes the entire configuration for the given RN.
  • Management MAC address: Each RN must have a unique management interface MAC address in the connected fabric.
  • Packet removal policy: Defines the behavior when the RN disks reach capacity. By default, the most recent packets overwrite the earliest recorded packets. The other option is to stop recording and wait until space becomes available.
  • Record enable or disable: DMF enables packet recording by default; recording can be disabled for a specific RN.
  • Static auth tokens: Static auth tokens are pushed to each RN as an alternative form of authentication in headless mode when the DMF Controller is unreachable or by third-party applications that do not have or do not need DMF Controller credentials.
  • Controller auth token: The RN treats the controller as an ordinary client and requires it to present valid credentials as an authentication token. The DMF Controller authentication token is automatically generated and resettable upon request.
  • Pre-buffer: A rolling buffer, defined in minutes, used for proactive network monitoring without recording and retaining unnecessary packets. Once the buffer is full, DMF deletes the oldest packets.
  • Maximum disk utilization: This defines the maximum disk utilization as a percentage between 5% and 95%. After reaching the configured utilization, DMF enforces the packet removal policy. The default maximum disk utilization is 95%.
  • Maximum packet age: This defines the maximum age in minutes of any packet in the RN. Use it with the packet removal policy to control when packets are deleted based on age rather than disk utilization alone. When not set, DMF does not enforce the maximum packet age and keeps packets until the maximum disk utilization is reached.
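The interaction between the packet removal policy and the maximum disk utilization can be modeled as follows. This is an illustrative sketch only; the function name and structure are assumptions, not RN source code:

```python
def apply_removal_policy(file_sizes, capacity, max_util_pct, policy):
    """Model of the packet removal policy. `file_sizes` lists recorded
    file sizes oldest-first; returns (files_kept, recording_allowed)."""
    limit = capacity * max_util_pct / 100.0
    if policy == "stop-and-wait":
        # Stop recording while usage is at or above the configured limit.
        return list(file_sizes), sum(file_sizes) < limit
    # Default policy: delete the oldest packets to make room for new ones.
    kept = list(file_sizes)
    while kept and sum(kept) >= limit:
        kept.pop(0)
    return kept, True
```

With the default 95% maximum disk utilization, the oldest files are pruned until usage falls back below the limit; under stop-and-wait, nothing is deleted and recording pauses instead.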

Indexing Configuration

The Recorder Node (RN) indexing configuration defines the fields used to query packets on the RN. By default, DMF enables all indexing fields in the indexing configuration. Selectively disable the specific indexing fields not required in RN queries.

Disabling indexing fields has two advantages. First, it reduces the index space required for each packet recorded. Second, it improves query performance by reducing unnecessary overhead. Arista recommends disabling unnecessary indexing fields.

The RN supports the following indexing fields:
  • MAC Source
  • MAC Destination
  • VLAN 1: Outer VLAN ID
  • VLAN 2: Inner/Middle VLAN ID
  • VLAN 3: Innermost VLAN ID
  • IPv4 Source
  • IPv4 Destination
  • IPv6 Source
  • IPv6 Destination
  • IP protocol
  • Port Source
  • Port Destination
  • MPLS
  • Community ID
  • MetaWatch Device ID
  • MetaWatch Port ID
Note: Enable the Outer VLAN ID indexing field to query the RN using a DANZ Monitoring Fabric (DMF) policy name or a DMF filter interface name.

To understand leveraging an indexing configuration, consider the following examples:

Example 1: To query packets based on applications defined by unique transport ports, disable all indexing fields except source and destination transport ports, saving only transport ports as metadata for each packet recorded. This technique greatly reduces per-packet index space consumption and increases RN query speed.

However, this prevents effective queries on any other indexing field because that metadata was not saved when the packets were recorded.

Example 2: The RN supports community ID indexing, a hash of the IP addresses, IP protocol, and transport ports that identifies a flow of interest. If the RN use case is to query based on community ID, then indexing on IPv4 source and destination addresses, IPv6 source and destination addresses, IP protocol, and transport source and destination ports might be redundant.
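For reference, the community ID convention referred to here hashes the canonically ordered flow tuple with SHA-1 and base64-encodes the digest under a version prefix. A minimal IPv4 TCP/UDP sketch following the open Community ID version-1 layout (illustrative; DMF's internal implementation may differ):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, daddr, proto, sport, dport, seed=0):
    """Compute a version-1 community ID for an IPv4 TCP/UDP flow."""
    sb, db = socket.inet_aton(saddr), socket.inet_aton(daddr)
    sp, dp = struct.pack("!H", sport), struct.pack("!H", dport)
    # Canonical ordering: lower endpoint first, so both directions of a
    # flow produce the same ID.
    if sb > db or (sb == db and sport > dport):
        sb, db, sp, dp = db, sb, dp, sp
    data = struct.pack("!H", seed) + sb + db + struct.pack("!BB", proto, 0) + sp + dp
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()
```

Because both directions of a flow hash to the same ID, per-address and per-port indexes can indeed be redundant when all queries go through the community ID.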

Pre-buffer Configuration and Events

The Recorder Node (RN) pre-buffer is a circular buffer recording received packets. When enabled, the pre-buffer feature allows for the retention of the packets received by the RN for a specified length of time prior to an event that triggers the recording of buffered and future packets to disk. Without an event, the RN will record into this buffer, deleting the oldest packets when the buffer reaches capacity.

When an RN event is triggered, DMF saves packets in the pre-buffer to disk. The packets received from the time of the event trigger to the time of the event termination are saved directly to disk upon termination of the event. However, the received packets are also retained in the pre-buffer until the next event is triggered. By default, the pre-buffer feature is disabled, indicated by a value of zero minutes.

For example, when the pre-buffer is configured to thirty minutes, the buffer retains up to thirty minutes of packets. When an event is triggered, DMF records the packets currently in the buffer to disk, and packets newly received by the RN bypass the buffer and are written directly to disk until the event terminates. When the event terminates, the pre-buffer resets and accumulates received packets for up to the configured thirty minutes.

The packets affiliated with an event can be queried, replayed, or analyzed using any RN query. Each triggered event is identified by a unique, user-supplied name, used in the query to reference packets recorded in the pre-buffer before and during the event.
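The pre-buffer life cycle described above can be modeled as a small time-windowed ring buffer. This sketch is illustrative only; the class and its structure are assumptions, not RN internals:

```python
from collections import deque

class PreBuffer:
    """Windowed retention of packets until an event triggers recording to disk."""

    def __init__(self, window_minutes):
        self.window = window_minutes * 60      # retention window in seconds
        self.buf = deque()                     # (timestamp, packet) pairs
        self.recording = False
        self.disk = []                         # stand-in for packets saved to disk

    def receive(self, pkt, now):
        if self.recording:
            self.disk.append(pkt)              # during an event, write directly to disk
            return
        self.buf.append((now, pkt))
        while self.buf and now - self.buf[0][0] > self.window:
            self.buf.popleft()                 # delete the oldest packets when aged out

    def trigger_event(self):
        self.disk.extend(p for _, p in self.buf)   # flush buffered packets to disk
        self.buf.clear()
        self.recording = True

    def terminate_event(self):
        self.recording = False                 # a fresh pre-buffer starts accumulating
```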

Using an Authentication Token

When using a DANZ Monitoring Fabric (DMF) Controller authentication token, the Recorder Node (RN) treats the DMF Controller as an ordinary client, requiring it to present valid credentials either in the form of an HTTP basic username and password or an authentication token.

Static authentication tokens are pushed to each RN as an alternative form of authentication in headless mode when the DMF Controller is unreachable or by third-party applications that do not have or do not need Controller credentials.

Using the GUI to Add a Recorder Device

To configure a Recorder Node (RN) or update the configuration of an existing RN, follow the steps below:
  1. Select Monitoring > Recorder Nodes from the main menu bar of the DANZ Monitoring Fabric (DMF) GUI.

    The system displays the page shown below.

    Figure 1. Recorder Nodes
  2. To add a new RN, click the provision control (+) in the Recorder Nodes Devices table.
    Figure 2. Provision Recorder Node
  3. Enter the following information in the required fields:
    • Assign a name to the RN.
    • Set the MAC address of the RN. Obtain the MAC address from the chassis ID of the connected device, using the Fabric > Connected Devices option.
  4. Configure the following options as needed:
    • Recording: Recording is enabled by default. To disable recording on the RN, move the Recording toggle switch to Off. When recording is enabled, the RN records the matching traffic directed from the filter interface defined in a DMF policy.
    • Disk Full Policy: Change the Disk Full Policy to Stop and Wait if required. The default packet removal policy is Rolling FIFO (First In First Out), which means the oldest packets will be deleted to make room for newer packets. This occurs only when the RN disks are full. The alternative removal policy is Stop and Wait, which causes the RN to stop recording when the disks are full and wait until disk space becomes available. Disk space can be made available by leveraging the RN delete operation to remove all or selected time ranges of recorded packets.
    • Backup Disk Policy: Specify the disk backup policy as desired. This field is mandatory when creating a new recorder node. Select one of the following three options:
      • No Backup: This is the default option and is also the recommended option when no extra disk is available. It is also a continuation of the behavior supported in previous releases.
      • Remote Extend: In this option, recording is performed on the local disks. When full, the recording continues on a remote Isilon cluster mounted over NFS. In this mode, the remote disks are called backup disks. With regard to the Disk Full Policy, if set to:
        • Stop and Wait: Recording stops when both local and remote disks become full.
        • Rolling FIFO: When the configured threshold is reached, the oldest files from both disks are removed until disk usage falls below the threshold.
      • Local Fallback: In this option, recording is performed on a remote Isilon cluster mounted over NFS. If the connection between the Recorder Node and the remote cluster fails, the recording is performed on the local disks until the failure is resolved. In this mode, the local disks are called backup disks. With regard to the Disk Full Policy, if set to:
        • Stop and Wait: Recording stops when the remote disks become full.
        • Rolling FIFO: When the configured threshold is reached, the oldest files from both disks are removed until disk usage falls below the threshold.
      Note: Local fallback does not apply when the connection failure is caused by a misconfiguration of the NFS server on the DMF Controller. In such cases, recording stops until the Controller’s configuration is fixed.
    • Max Packet Age: Set the maximum age in minutes of any recorded packet on the RN; packets older than this are discarded. Use it with the Disk Full Policy to control when packets are deleted based on age rather than disk utilization alone. When unset, Max Packet Age is not enforced.
    • Pre-Buffer: Assign the number of minutes of received packets the RN pre-buffer retains. By default, the Pre-Buffer is set to zero minutes (disabled). With a nonzero Pre-Buffer, triggering a recorder event saves any packets in the pre-buffer to disk, and packets received after the trigger are saved directly to disk. When an ongoing recorder event terminates, a new pre-buffer is established in preparation for the next event.
    • Max Disk Utilization: Specify the maximum utilization allowed on the index and packet disks. The Disk Full Policy is enforced at this limit. If left unset, the disk space is used to capacity.
    • Parse MetaWatch Trailer: Controls parsing of the MetaWatch trailer.
      • Off: The RN does not parse the MetaWatch trailer, even if it is present in incoming packets.
      • Auto: The RN looks for a valid timestamp in the last 12 bytes of the packet. If it matches the system timestamp closely enough, the RN parses the trailer.
      • Force: The RN assumes the last 12 bytes of the packet are a MetaWatch trailer and parses it, even if it does not find a valid timestamp.
  5. Click Save to save and close the configuration page, or click NEXT to continue to the Indexing tab of the Provision Recorder Node page.
    Figure 3. Provision Recorder Node-Indexing
  6. All the indexing options are enabled by default. To disable any of the indexing behaviors, move the toggle switch of the respective item to the left. For more details, see the Indexing Configuration section.
  7. Click Save to save and close the configuration page, or click NEXT to continue to the Network tab of the Provision Recorder Node page.
    Figure 4. Provision Recorder Node

Configuring a Node to Use Local Storage


To configure a node to use local storage, use the following steps:
  1. Network: To use local storage, set the Auxiliary NIC Configuration to default (No) as shown in the figure below.
    Figure 5. Network Provisioning
  2. Storage: To use local storage, set the Index Disk Configuration and Packet Disk Configuration to default (No) as shown in the figure below.
    Figure 6. Configure to Use Local Storage
  3. Click Save to add the recorder node configuration to the Controller.

Configuring a Node to Use External Storage

To store packets on external storage using an NFS mount, connect the Recorder Node's (RN) auxiliary interface to the same network and subnet where the NFS storage resides, as displayed in the figure below.
Figure 7. Topology to Use External Storage
Note: Create the volume for the index and packet on the NFS storage first. Refer to the vendor-specific NFS storage documentation about creating the volume (or path).
To configure an RN for external NFS storage, update the configuration of an existing recorder node or add a new node with the following steps:
Note: DMF release 7.2 only supports Isilon NFS storage.
  1. Network: For external NFS storage, such as Isilon, connect the auxiliary interface of the RN to a network and subnet that is reachable to Isilon NFS storage. Set the Auxiliary NIC Configuration toggle switch to YES and assign an IP address to the auxiliary interface, as shown in the figure below. Ensure the IP address for the auxiliary interface is not in the same subnet as the RN management IP address.
    Figure 8. Provision External Storage
  2. Storage: To specify the location of the external NFS storage, configure the following options:
    • Index Disk Configuration and Packet Disk Configuration are disabled by default (toggle switch set to No). Set the toggle switch for both Index Disk Configuration and Packet Disk Configuration to Yes.
    • NFS Server [Index Disk Configuration and Packet Disk Configuration]: assign the IP address or hostname for the NFS Server (e.g., Isilon Smart Connect hostname).
    • Transport Port of NFS Service [Index Disk Configuration and Packet Disk Configuration]: If no value is specified, DMF uses the default value (2049). Specify a value if the NFS storage has been configured to use something other than the default.
    • Transport Port of Mounted Service [Index Disk Configuration and Packet Disk Configuration]: if no value is specified, DMF uses the default value. Specify a value for this if the NFS storage-mounted service has been configured to use something other than the default value.
    • Volume: [Index Disk Configuration and Packet Disk Configuration] - Specify the storage location or path on the NFS server for the index and packets.
    Figure 9. Provision External Storage
  3. Click Save to add the RN configuration to the Controller.
    Note: When changing a previously added RN from local storage to external storage or vice versa, reboot the RN.

Configuring a Recorder Node Interface

To record packets to a recorder node using a DANZ Monitoring Fabric (DMF) policy, configure a DMF Recorder Node (RN) interface that defines the switch and interface in the monitoring fabric where the RN is connected. The DMF RN interface is referenced by name in the DMF policy as the destination for traffic matched by the policy. To configure a DMF RN interface, perform the following steps:
  1. Click the provision control (+) at the top of the Recorder Node Interfaces table. The system displays the following page:
    Figure 10. Create DMF Recorder Node Interface
  2. Assign a name for the DMF RN interface in the Name field.
  3. Select the switch containing the interface that connects the RN to the monitoring fabric.
  4. Select the interface that connects the RN to the monitoring fabric.
  5. (Optional) Type information about the interface in the Description field.
  6. Click Save to add the configuration to the DMF Controller.

Using the GUI to Assign a Recorder Interface to a Policy

To forward traffic to a Recorder Node (RN), include one or more RN interfaces as a delivery interface in a DANZ Monitoring Fabric (DMF) policy.

When creating a new policy or editing an existing policy, select the RN interfaces from the Monitoring > Policies dialog, as shown in the following screen.
Figure 11. DMF Policies
Note: To create an RN interface, proceed to the Monitoring > Recorder Nodes page and click the + in the Interface section.
To create a policy, select Destination Tools > Add Port(s) and use the RN Fabric Interface to select a previously configured RN interface. Select or drag the Interfaces or Recorder Nodes to add Destination Tools.
Figure 12. Recorder Node - Create Policy
Figure 13. Add Recorder Nodes
Note: The RN interface can only be selected, not created, in the create policy dialog.

Using the GUI to Define a Recorder Query

The Recorder Node (RN) records all the packets received on a filter interface that match the criteria defined in a DANZ Monitoring Fabric (DMF) policy. Recorded packets can be recalled from or analyzed on the RN using a variety of queries. Use the options in the RN Query section to create a query and submit it to the RN for processing. The following queries are supported:
  • Window: Retrieves the timestamps of the oldest and most recent packets recorded on the RN.
  • Size: Provides the number of packets and their aggregate size in bytes that match the filter criteria specified.
  • Application: Performs deep packet inspection to identify the applications communicating in the recorded packets that match the specified filter criteria.
  • Packet-data: Retrieves all the packets that match the filter criteria specified.
  • Packet-object: Extracts unencrypted HTTP objects from packets matching the given Stenographer filter.
  • HTTP, HTTP Request, and HTTP Stat: Analyzes HTTP packets, extracting request URLs, response codes, and statistics.
  • DNS: Analyzes any DNS packets, extracting query and response metadata.
  • Replay: Replays selected packets and transmits them to the specified delivery interface.
  • IPv4: Identifies and dissects distinct IPv4 flows.
  • IPv6: Identifies and dissects distinct IPv6 flows.
  • TCP: Identifies and dissects distinct TCP flows.
  • TCP Flow Health: Analyzes TCP flows for information such as maximum RTT, retransmissions, throughput, etc.
  • UDP: Identifies and dissects distinct UDP flows.
  • Hosts: Identifies all the unique hosts that match the filter criteria specified.
  • RTP Stream: Characterizes the performance of Real-time Transport Protocol (RTP) streaming packets.
After making a selection from the Query Type list, the system displays additional fields to further filter the retrieved results, as shown below:
Figure 14. Packet Recorder Node Query
Use the following options to specify the packets to include in the query:
  • Relative Time: A time range relative to the current time in which to look for packets.
  • Absolute Time: A specific time range in which to look for packets.
  • Any IP: Include packets with the specified IP address in the IP header (either source or destination).
  • Directional IP: Include packets with the specified source and/or destination IP address in the IP header.
  • Src Port: Include packets with the specified source port in the transport (TCP/UDP) header.
  • Dst Port: Include packets with the specified destination port in the transport (TCP/UDP) header.
  • IP Protocol: Select the IP protocol from the selection list or specify the numeric identifier of the protocol.
  • Community ID: Select packets with a specific BRO community ID string.
  • Src Mac: Select packets with a specific source MAC address.
  • Dst Mac: Select packets with a specific destination MAC address.
  • VLAN: Select packets with a specific VLAN ID.
  • Filter Interfaces: Click the provision (+) control and, in the dialog that appears, enable the checkbox for one or more filter interfaces to restrict the query to those interfaces. To add interfaces to the dialog, click the provision (+) control on the dialog and select the interfaces from the list that is displayed.
  • Policies: Click the provision (+) control and, in the dialog that appears, enable the checkbox for one or more policies to restrict the query to those policies. To add policies to the dialog, click the provision (+) control on the dialog and select the policies from the list that is displayed.
  • Max Bytes: This option is only available for packet queries. Specify the maximum number of bytes returned by a packet query in a PCAP file.
  • Max Packets: This option is only available for packet queries. Specify the maximum number of packets returned by a packet query in a PCAP file.
  • MetaWatch Device ID: Filter packets with the specified MetaWatch device ID.
  • MetaWatch Port ID: Filter packets with the specified MetaWatch port ID.
Alternatively, use Global Query Configuration to set the byte limit on packet query results.
Figure 15. Global Query Configuration
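The Community ID filter described above refers to the open Community ID flow-hashing scheme popularized by Bro/Zeek. As background, the sketch below shows how a version-1 Community ID string is derived for an IPv4 flow; it is a minimal illustration of the public specification, not DMF's implementation, and the example addresses are arbitrary.

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(src_ip, src_port, dst_ip, dst_port, proto, seed=0):
    """Compute a version-1 Community ID for an IPv4 flow (illustrative sketch)."""
    saddr = socket.inet_aton(src_ip)
    daddr = socket.inet_aton(dst_ip)
    sport = struct.pack("!H", src_port)
    dport = struct.pack("!H", dst_port)
    # Order the endpoints so both directions of a flow hash identically.
    if (saddr, sport) > (daddr, dport):
        saddr, daddr = daddr, saddr
        sport, dport = dport, sport
    # seed (2 bytes) + addresses + protocol + padding byte + ports
    data = struct.pack("!H", seed) + saddr + daddr + bytes([proto, 0]) + sport + dport
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")

a = community_id_v1("10.0.0.145", 51234, "8.8.8.8", 53, 17)
b = community_id_v1("8.8.8.8", 53, "10.0.0.145", 51234, 17)
assert a == b and a.startswith("1:")  # both directions hash to the same ID
```

The endpoint-ordering rule is what allows a single Community ID string to retrieve both directions of a conversation in a query.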

Viewing Query History

View queries submitted to Recorder Nodes (RNs) using the GUI or CLI.

To view the query history using the DANZ Monitoring Fabric (DMF) GUI, select Monitoring > Recorder Nodes and scroll down to the Query History section.
Figure 16. Monitoring > Recorder Nodes > Query History

The Query History section displays the queries submitted to each RN and the query status.

To download the query results, select Download Results from the Menu control for a specific query. To export the query history, click the Export control at the top of the table (highlighted in the figure above, to the right of the Refresh control).

To display query history using the CLI, enter the following command:
controller-1> show recorder-node query-history
# Packet Recorder Query         Type           Start                          Duration
-|---------------|-------------|--------------|------------------------------|--------|
1 HW-PR-2         after 10m ago analysis-hosts 2019-03-20 09:52:38.021000 PDT 3428
2 HW-PR-1         after 10m ago analysis-hosts 2019-03-20 09:52:38.021000 PDT 3428
3 HW-PR-2         after 10m ago abort          2019-03-20 09:52:40.439000 PDT 711
4 HW-PR-1         after 10m ago abort          2019-03-20 09:52:40.439000 PDT 711
------------------------------------------------output truncated------------------------------------------------

Using the CLI to Manage the DMF Recorder Node

Basic Configuration

To perform basic Recorder Node (RN) configuration, perform the following steps:
  1. Assign a name to the RN device.
    controller-1(config)# recorder-node device rn-alias
  2. Set the MAC address of the RN.
    controller-1(config-recorder-node)# mac 18:66:da:fb:6d:b4
    If the management MAC address is unknown, determine it from the chassis ID of the connected devices.
  3. Define the RN interface name.
    controller-1(config)# recorder-fabric interface Intf-alias
    controller-1(config-pkt-rec-intf)#

    Assign any alphanumeric identifier as the RN interface name. Entering this command changes the submode to config-pkt-rec-intf, where you can provide an optional description and specify the switch and interface where the RN is connected.

  4. Provide an optional description and identify the switch interface connected to the RN.
    controller-1(config-pkt-rec-intf)# description 'Delivery point for recorder-node'
    controller-1(config-pkt-rec-intf)# recorder-interface switch Switch-z9100 ethernet37
  5. (Optional) Recording: Enabled by default. To disable recording, enter the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# no record
  6. (Optional) Disk Full Policy: By default, Disk Full Policy is set to rolling-fifo, deleting the oldest packets to make room for newer packets when RN disks are full. This configuration can be changed to stop-and-wait, allowing the RN to stop recording until disk space becomes available. Enter the commands below to configure Disk Full Policy to stop-and-wait.
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# when-disk-full stop-and-wait
  7. Backup Disk Policy: Define the backup disk policy to select the secondary volume, choosing one of the following three options:
    controller-1(config-recorder-node)# backup-volume
    local-fallback  Set local disk as backup when remote disk is unreachable
    no-backup       Do not use any backup volume (default selection)
    remote-extend   Set remote volume to extend local main disk
    The no-backup mode is the default mode. The other two modes require that the Recorder Node have a set of recording disks and a connection to an Isilon cluster mounted via NFS. Configure this remote storage from the DMF Controller.
  8. (Optional) Max Packet Age: This defines the maximum age in minutes of any packet in the RN. By default, Max Packet Age is unset, which means no limit is enforced. When setting a Max Packet Age, packets recorded on the RN are discarded after the minutes specified. To set the maximum number of minutes that recorded packets will be kept on the RN, enter the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# max-packet-age 30
    This sets the maximum time to keep recorded packets to 30 minutes.
    Note: Combine Max Packet Age with the packet removal policy to control when packets are deleted based on age rather than disk utilization alone.
  9. (Optional) Max Disk Utilization: This defines the maximum disk utilization as a percentage between 5% and 95%. The Disk Full Policy (rolling-fifo or stop-and-wait) is enforced when reaching this value. If unset, the default maximum disk utilization is 95%; however, configure it, as required, using the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# max-disk-utilization 80
  10. (Optional) Disable unused or unneeded indexing configuration fields in subsequent recorder node queries. DMF enables all indexing fields by default. To disable a specific indexing option, enter the following commands from the config-recorder-node-indexing submode. To re-enable a disabled option, enter the command without the no prefix.
    Use the following command to enter the RN indexing submode:
    controller-1(config-recorder-node)# indexing
    controller-1(config-recorder-node-indexing)#
    Use the following commands to disable any unused fields in subsequent queries:
    • Disable MAC Source indexing: no mac-src
    • Disable MAC Destination indexing: no mac-dst
    • Disable outer VLAN ID indexing: no vlan-1
    • Disable inner/middle VLAN ID indexing: no vlan-2
    • Disable innermost VLAN ID indexing: no vlan-3
    • Disable IPv4 Source indexing: no ipv4-src
    • Disable IPv4 Destination indexing: no ipv4-dst
    • Disable IPv6 Source indexing: no ipv6-src
    • Disable IPv6 Destination indexing: no ipv6-dst
    • Disable IP Protocol indexing: no ip-proto
    • Disable Port Source indexing: no port-src
    • Disable Port Destination indexing: no port-dst
    • Disable MPLS indexing: no mpls
    • Disable Community ID indexing: no community-id
    • Disable MetaWatch Device ID: no mw-device-id
    • Disable MetaWatch Port ID: no mw-port-id
    For example, the following command disables indexing for the source MAC address:
    controller-1(config-recorder-node-indexing)# no mac-src
  11. Identify the RN interface by name in an out-of-band policy.
    controller-1(config)# policy RecorderNodePolicy
    controller-1(config-policy)# use-recorder-fabric-interface intf-1
    controller-1(config-policy)#
  12. Configure the DANZ Monitoring Fabric (DMF) policy to identify the traffic to send to the RN.
    controller-1(config-policy)# 1 match any
    controller-1(config-policy)# filter-interface FilterInterface1
    controller-1(config-policy)# action forward
    This example forwards all traffic received in the monitoring fabric on filter interface FilterInterface1 to the RN interface. The following is the running-config for this example configuration:
    recorder-fabric interface intf-1
    description 'Delivery point for recorder-node'
    recorder-interface switch 00:00:70:72:cf:c7:cd:7d ethernet37
    policy RecorderNodePolicy
    action forward
    filter-interface FilterInterface1
    use-recorder-fabric intf-1
    1 match any

Authentication Token Configuration

Static authentication tokens are pushed to each Recorder Node (RN) as an alternative form of authentication. They are used in headless mode, when the DANZ Monitoring Fabric (DMF) Controller is unreachable, or by third-party applications that do not have (or do not need) DMF Controller credentials to query the RN.

To configure the RN with a static authentication token, use the following commands:
controller-1(config)# recorder-node auth token mytoken
Auth : mytoken
Token : some_secret_string <--- secret plaintext token displayed once here
controller-1(config)# show running-config recorder-node auth token
! recorder-node
recorder-node auth token mytoken $2a$12$cwt4PvsPySXrmMLYA.Mnyus9DpQ/bydGWD4LEhNL6xhPpkKNLzqWS <--- hashed token shown in running-config
The DMF Controller uses its hidden authentication token to query the RN. To regenerate the Controller authentication token, use the following command:
controller-1(config)# recorder-node auth generate-controller-token

Configuring the Pre-buffer

To enable the pre-buffer or change the time allocated, enter the following commands:
controller-1(config)# recorder-node device <name>
controller-1(config-recorder-node)# pre-buffer <minutes>

Replace name with the recorder node name. Replace minutes with the number of minutes to allocate to the pre-buffer.

Triggering a Recorder Node Event

To trigger an event for a specific Recorder Node (RN), enter the following command from enable mode:

controller-1# trigger recorder-node <name> event <event-name>

Replace name with the RN name and replace event-name with the name to assign to the current event.

Terminating a Recorder Node Event

To terminate a Recorder Node (RN) event, use the following command:
controller-1# terminate recorder-node <name> event <event-name>

Replace name with the RN name and replace event-name with the RN event name to terminate.

Viewing Recorder Node Events

To view recorder node events, enter the following command from enable mode:
controller-1# show recorder-node events
# Packet Recorder Time Event
-|---------------|------------------------------|-------------------------------------------------------------------|
1 pkt-rec-740 2018-02-06 16:21:37.289000 UTC Pre-buffer event my-event1 complete. Duration 3 minute(s)
2 pkt-rec-740 2018-02-06 20:23:59.758000 UTC Pre-buffer event event2 complete. Duration 73 minute(s)
3 pkt-rec-740 2018-02-07 22:39:15.036000 UTC Pre-buffer event event-02-7/event3 complete. Duration 183 minute(s)
4 pkt-rec-740 2018-02-07 22:40:15.856000 UTC Pre-buffer event event5 triggered
5 pkt-rec-740 2018-02-07 22:40:16.125000 UTC Pre-buffer event event4/event-02-7 complete. Duration 1 minute(s)
6 pkt-rec-740 2018-02-22 06:53:10.216000 UTC Pre-buffer event triggered

Using the CLI to Run Recorder Node Queries

Note: The DANZ Monitoring Fabric (DMF) Controller prompt is displayed immediately after entering a query or replay request, but the query continues in the background. If another replay or query command is entered before the previous one completes, an error message is displayed.

Packet Replay

Enter the replay recorder-node command from enable mode to replay the packets recorded by a Recorder Node (RN).
controller-1# replay recorder-node <name> to-delivery <interface> filter <stenographer-query>
[realtime | replay-rate <bps> ]
The following are the options available with this command.
  • name: Specify the RN from which to replay the recorded packets.
  • interface: The DMF delivery interface name receiving the packets.
  • stenographer-query: The filter used to look up desired packets.
  • (Optional) realtime: Replay the packets at the original rate recorded by the specified RN. The absence of this parameter will result in a replay up to the line rate of the RN interface.
  • (Optional) replay-rate bps: Specify the number of bits per second used for replaying the packets recorded by the specified RN. The absence of this parameter will result in a replay up to the line rate of the RN interface.
The following command shows an example of a replay command using the to-delivery option.
controller-1# replay recorder-node packet-rec-740 to-delivery eth26-del filter 'after 1m ago'
controller-1#
Replay policy details:
controller-1# show policy-flow | grep replay
1 __replay_131809296636625 packet-as5710-2 (00:00:70:72:cf:c7:cd:7d) 0 0 6400 1
in-port 47 apply: name=__replay_131809296636625 output: max-length=65535, port=26
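When picking a replay-rate, a quick estimate of the replay duration can help size the operation. The helper below is illustrative only; the function name is hypothetical, not a DMF command.

```python
def replay_duration_seconds(capture_bytes: int, replay_rate_bps: int) -> float:
    """Estimated wall-clock replay time: bits to transmit divided by the rate."""
    return capture_bytes * 8 / replay_rate_bps

# Replaying 10 GB of recorded packets at a replay-rate of 1 Gbps:
print(replay_duration_seconds(10 * 10**9, 10**9))  # 80.0 seconds
```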

Packet Data Query

Use a packet query to search the packets recorded by a specific Recorder Node (RN). The operation uses a Stenographer query string to filter only the interesting traffic. The query returns a URL to download and analyze the packets using Wireshark or other packet-analysis tools.

From enable mode, enter the query recorder-node command.
switch# query recorder-node <name> packet-data filter <stenographer-query>
The following is the meaning of each parameter:
  • name: Identify the RN.
  • packet-data filter stenographer-query: Look up only the packets that match the specified Stenographer query.
The following example illustrates the results returned:

Packet Object Query

The packet object query extracts unencrypted HTTP objects from packets matching the given Stenographer filter. To run a packet object query, enter the following command:
switch# query recorder-node bmf-integrations-pr-1 packet-object filter 'after 5m ago'
The following example illustrates the results returned:
switch# query recorder-node bmf-integrations-pr-1 packet-object filter 'after 1m ago'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Packet Object Query Results ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Coalesced URL : /pcap/__packet_recorder__/coalesced-bmf-2022-11-21-14-27-56-67a73ea9.tgz
Individual URL(s) : /pcap/__packet_recorder__/bmf-integrations-pr-1-2022-11-21-14-27-55-598f5ae7.tgz

Untar the folder to extract the HTTP objects.

Size Query

Use a size query to analyze the number of packets and the total size recorded by a specific Recorder Node (RN). The operation uses a Stenographer query string to filter only the interesting traffic.

Enter the query recorder-node command from enable mode to run a size query.
# query recorder-node <name> size filter <stenographer_query>
The following is the meaning of each parameter:
  • name: Identify the RN.
  • size filter stenographer-query: Analyze only the packets that match the specified Stenographer query.
The following example illustrates the results returned:
switch# query recorder-node hq-bmf-packet-recorder-1 size filter "after 1m ago and src host 8.8.8.8"
~ Summary Query Results ~
# Packets : 66
Size: 7.64KB
~ Error(s) ~
None.

Window Query

Use a window query to analyze the oldest and most recent available packets recorded by a specific Recorder Node (RN).

Enter the query recorder-node command from enable mode to run a window query.

switch# query recorder-node <name> window
The following is the meaning of each parameter:
  • name: Identify the RN.
The following example illustrates the results returned:
switch# query recorder-node hq-bmf-packet-recorder-1 window
~~~~~~~~~~~~~ Window Query Results ~~~~~~~~~~~~~
Oldest Packet Available : 2020-07-30 05:01:08 PDT
Newest Packet Available : 2020-10-19 08:14:21 PDT
~ Error(s) ~
None.
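The two timestamps in the window output can be converted into a retention span with a short script. This is a sketch; it assumes the timestamp layout shown above, with the time-zone suffix stripped before parsing.

```python
from datetime import datetime, timedelta

def retention_window(oldest: str, newest: str) -> timedelta:
    """Parse the Oldest/Newest Packet Available timestamps and return the span."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(newest, fmt) - datetime.strptime(oldest, fmt)

# Using the timestamps from the example output above (PDT suffix removed):
span = retention_window("2020-07-30 05:01:08", "2020-10-19 08:14:21")
print(span.days)  # 81
```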

Stopping a Query

Use the abort recorder-node command to stop the query running on the specified Recorder Node (RN). From enable mode, enter the following command:
controller-1# abort recorder-node <name> filter <string>
Replace name with the RN name, and use the filter keyword to identify the specific filter used to submit the query. If the specific running query is unknown, use an empty-string filter of "" to terminate any running query.
controller-1# abort recorder-node hq-bmf-packet-recorder-1 filter ""
Abort any request with the specified filter? This cannot be undone. enter "yes" (or "y") to
continue:
yes
Result : Success
~ Error(s) ~
None.

Using RBAC to Manage Access to the DMF Recorder Node

Use Role-Based Access Control (RBAC) to manage access to the DANZ Monitoring Fabric (DMF) Recorder Node (RN) by associating the RN with an RBAC group.

To restrict access for a specific RN to a specific RBAC group, use the CLI or GUI as described below.

RBAC Configuration Using the CLI

  1. Identify the group to associate the Recorder Node (RN).
    Enter the following command from config mode on the active DANZ Monitoring Fabric (DMF) controller:
    controller-1(config)# group test
    controller-1(config-group)#
  2. Associate one or more RNs with the group.
    Enter the following CLI command from the config-group submode:
    controller-1(config-group)# associate recorder-node <device-name>
    Replace device-name with the RN name, as in the following example:
    controller-1(config-group)# associate recorder-node HW-PR-1

RBAC Configuration Using the GUI

  1. Select Security > Groups, and click + Create Group.
    Figure 17. Create Security Group
  2. Enter a Group Name.
    Figure 18. Create Group
  3. Under the Role Based Access Control section, select Add Recorder Node.
  4. Select the Recorder Node from the selection list, and assign the permissions required.
    • Read: The user can view recorded packets.
    • Use: The user can define and run queries.
    • Configure: The user can configure packet recorder instances and interfaces.
    • Export: The user can export packets to a different device.
    Figure 19. Associate Recorder Node
  5. Click Create.

Using the CLI to View Information About a Recorder Node

This section describes monitoring and troubleshooting the Recorder Node (RN) status and operation. The RN stores packets on the main hard disk and the indices on the SSD volumes.

Viewing the Recorder Node Interface

To view information about the RN interface, use the following command:
controller-1(config)# show topology recorder-node
# DMF IF       Switch     IF Name   State Speed  Rate Limit
-|------------|----------|---------|-----|------|----------|
1 RecNode-Intf Arista7050 ethernet1 up    25Gbps -

Viewing Recorder Node Operation

controller-1# show recorder-node device packet-rec-740 interfaces stats
Packet Recorder Name Rx Pkts       Rx Bytes        Rx Drop  Rx Errors Tx Pkts  Tx Bytes   Tx Drop Tx Errors
---------------|----|-------------|---------------|--------|---------|--------|----------|-------|---------|
packet-rec-740  pri1 2640908588614 172081747460802 84204084 0         24630503 3053932660 0       0
Information about a Recorder Node (RN) interface used as a delivery port in a DANZ Monitoring Fabric (DMF) out-of-band policy appears in the policy output, which lists RN interfaces as dynamically added delivery interfaces.
Ctrl-2(config)# show policy PR-policy 
Policy Name                            : PR-policy
Config Status                          : active - forward
Runtime Status                         : installed
Detailed Status                        : installed - installed to forward
Priority                               : 100
Overlap Priority                       : 0
# of switches with filter interfaces   : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces  : 0
# of filter interfaces                 : 1
# of delivery interfaces               : 1
# of core interfaces                   : 0
# of services                          : 0
# of pre service interfaces            : 0
# of post service interfaces           : 0
Push VLAN                              : 1
Post Match Filter Traffic              : 1.51Gbps
Total Delivery Rate                    : 1.51Gbps
Total Pre Service Rate                 : -
Total Post Service Rate                : -
Overlapping Policies                   : none
Component Policies                     : none
Installed Time                         : 2023-09-22 12:16:55 UTC
Installed Duration                     : 3 days, 4 hours
~ Match Rules ~
# Rule        
-|-----------|
1 1 match any

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF      Switch              IF Name   State Dir Packets     Bytes          Pkt Rate Bit Rate Counter Reset Time             
-|-----------|-------------------|---------|-----|---|-----------|--------------|--------|--------|------------------------------|
1 Lab-traffic Arista-7050SX3-T3X5 ethernet7 up    rx  97831460642 51981008309480 382563   1.51Gbps 2023-09-22 12:16:55.738000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF          Switch              IF Name    State Dir Packets     Bytes          Pkt Rate Bit Rate Counter Reset Time             
-|---------------|-------------------|----------|-----|---|-----------|--------------|--------|--------|------------------------------|
1 PR-intf Arista-7050SX3-T3X5 ethernet35 up    tx  97831460642 51981008309480 382563   1.51Gbps 2023-09-22 12:16:55.738000 UTC

~ Service Interface(s) ~
None.

~ Core Interface(s) ~
None.

~ Failed Path(s) ~
None.
Ctrl-2(config)# 

Viewing Errors and Warnings

The following table lists the errors and warnings a recorder node may display. In the CLI, display these errors and warnings by entering the following commands:
  • show fabric errors
  • show fabric warnings
  • show recorder-node errors
  • show recorder-node warnings
Table 1. Errors and Warnings
Type Condition Cause Resolution
Error Recorder Node (RN) management link down. RN has not received controller LLDP. Wait 30s if the recorder node is newly configured. Verify it is not connected to a switch port that is a DANZ Monitoring Fabric (DMF) interface.
Error RN fabric link down. Controller has not received RN LLDP. Wait 30s if the recorder node is newly configured. Otherwise, verify that it is online.
Warning Disk/RAID health degraded. Possible hardware degradation. Investigate specific warning reported. Could be temperature issue. Possibly replace indicated disk soon.
Warning Low disk space. Packet or index disk space has risen above threshold. Prepare for disk full soon.
Warning Disk full. Packet or index disk space is full. Packets are being dropped or rotated depending on removal policy. Do nothing if removal policy is rolling-FIFO. Consider erasing packets to free up space otherwise.
Warning Recorder misconfiguration on a DMF interface. A recorder node has been detected in the fabric on a switch interface that is configured as a filter or delivery interface. Remove the conflicting interface configuration, or re-cable the recorder node to a switch interface not defined as a filter or delivery interface.

Using the GUI to View Recorder Node Statistics

View Recorder Node (RN) statistics by clicking the RN alias from the Monitoring > Recorder Nodes page.

Figure 20. DANZ Monitoring Fabric (DMF) List of Connected Recorder Nodes
Click a Recorder Node to display the available recorder node statistics. All statistics are disabled or hidden by default.
Figure 21. Available Recorder Node Statistics
Enable or view statistics by clicking on them. Selected statistics appear highlighted in blue.
Figure 22. Selected Recorder Node Statistics

The RN shows health statistics for the following:

CPU: CPU health displays the compute resource utilization of the recorder node.

Figure 23. Recorder Node CPU Health Statistics

Memory: DMF displays memory-related stats such as total memory, used, free, available, etc.

Figure 24. Recorder Node Memory Statistics
Storage: Storage health displays the storage utilization percentage and total and available capacity of Index and Packet virtual disks.
Figure 25. Recorder Node Storage Statistics
Time-based Disk Utilization Statistics: Time-based Disk Utilization Statistics provides an estimated time period until the Index and Packet virtual disks reach full storage capacity. This estimate is calculated based on data points (incoming data rate) collected periodically from the RN for a certain duration.
Note: Inaccurate data may appear if the collected data points are insufficient to calculate the disk-full estimate. However, once sufficient data points are collected, the estimate recalculates and updates automatically.
Figure 26. Time-based Disk Utilization Statistics
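Conceptually, an estimate like this can be obtained by fitting the periodically collected utilization samples to a straight line and extrapolating to full capacity. The sketch below illustrates the idea only; it is not the appliance's actual algorithm, and all names are hypothetical.

```python
def estimate_seconds_until_full(samples, capacity_bytes):
    """samples: list of (epoch_seconds, bytes_used) pairs.

    Fit a least-squares line to the samples, then extrapolate from the
    newest sample to the time at which usage reaches capacity_bytes.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    rate = num / den  # growth rate in bytes per second
    if rate <= 0:
        return None  # utilization flat or shrinking: no meaningful estimate
    last_t, last_u = samples[-1]
    return (capacity_bytes - last_u) / rate

# 100 bytes of growth per minute with 700 bytes of headroom -> ~420 seconds.
print(estimate_seconds_until_full([(0, 100), (60, 200), (120, 300)], 1000))
```

This also shows why the estimate is unreliable with too few samples, as the note above warns: the fitted slope stabilizes only after enough data points are collected.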

Virtual Disks: Virtual Disk's health stats display the Index and Packet virtual disks' size, state, health, and RAID level configuration.

Figure 27. Recorder Node Virtual Disk Details
Click on the drop-down arrow next to the virtual disk name to obtain information regarding participating physical disks, such as slot numbers, type, size, state, temperature, and Self-Monitoring, Analysis and Reporting Technology (SMART) stats, such as errors and failures, if any.
Figure 28. Recorder Node Virtual Disk Statistics
File Descriptors: the File Descriptor section displays the following:
  • File Descriptors (current): Current number of files open in the entire system.
  • Max System File Descriptors: Highest number of open files allowed on the entire system.
  • Max Stenographer File Descriptors: Highest number of open files allowed for Stenographer application.
Figure 29. Recorder Node File Descriptors Statistics
Mount: The Mount section displays the Index and Packet disk mount information, such as volume name, mount point, file system type, and mount health.
Figure 30. Recorder Node Mount Information

Stenographer: Stenographer Statistics are displayed as follows:

Figure 31. Recorder Node Stenographer Statistics
  • Initialized: Displays the Stenographer application running state. A green check mark indicates the application was successfully initialized. While the Stenographer application is starting, a red x mark appears, and recording and querying are disallowed during this time; this is expected behavior.
  • Tracked Files: Tracked files are the total number of files stored under each CPU instance thread.
  • Cached Files: Cached files are the number of open files with a file descriptor.
  • Max Cached Files: The maximum cached files are the total openable files allowed.
These numbers are further divided and displayed for each recording thread and viewable in the Recording Threads table:
Figure 32. Recorder Node Max Cached Files Statistics
Recording: Recording stats displays packet stats, such as dropped packets, total packets, and collection time for each CPU core.
Figure 33. Recorder Node Statistics
The following displays packet size distribution stats.
Figure 34. Recorder Node Packet Size Distribution Statistics
The following displays interface errors, such as CRC errors, frame length errors, and back pressure errors:
Figure 35. Recorder Node Interface Errors

Changing the Recorder Node Default Configuration

Configuration settings are automatically downloaded to the Recorder Node (RN) from the DANZ Monitoring Fabric (DMF) Controller, eliminating the need for box-by-box configuration. However, the option exists to override the default configuration for the RN from the config-recorder-node submode for any RN.
Note: These options are available only from the CLI, not the DMF Controller GUI.
To change the CLI mode to config-recorder-node, enter the following command from config mode on the active DMF controller:
controller-1(config)# recorder-node device <instance>

Replace instance with the alias to use for the RN. This alias is affiliated with the MAC hardware address using the mac command.

Use any of the following commands from the config-recorder-node submode to override the default configuration for the associated RN:
  • banner: Set the RN pre-login banner message
  • mac: Configure the MAC address for the RN
Additionally, the option exists to override the configurations shown below to use values specific to the RN or used in a merge-mode along with the configuration inherited from the DMF controller:
  • ntp: Configure RN to override default timezone and NTP parameters.
  • snmp-server: Configure RN SNMP parameters and traps.
  • logging: Enable RN logging to Controller.
  • tacacs: Set TACACS defaults, server IP address(es), timeouts and keys.
Use the following commands from the config-recorder-node submode to change the default configuration on the RN:
  • ntp override-global: Override global time configuration with RN time configuration.
  • snmp-server override-global: Override global SNMP configuration with RN SNMP configuration.
  • snmp-server trap override-global: Override global SNMP trap configuration with RN SNMP trap configuration.
  • logging override-global: Override global logging configuration with packet recorder logging configuration.
  • tacacs override-global: Override global TACACS configuration with RN TACACS configuration.
To configure the RN to work in a merge mode by merging its specific configuration with that of the DMF Controller, execute the following commands in the config-recorder-node submode:
  • ntp merge-global: Merge global time configuration with RN time configuration.
  • snmp-server merge-global: Merge global SNMP configuration with RN SNMP configuration.
  • snmp-server trap merge-global: Merge global SNMP trap configuration with RN SNMP trap configuration.
  • logging merge-global: Merge global logging configuration with RN logging configuration.

TACACS configuration does not have a merge option. It can either be inherited from the DMF Controller or overridden to use only the RN-specific configuration.

Large PCAP Queries

Access the Recorder Node (RN) directly via a web browser to run large PCAP queries. This allows running packet queries against the RN without specifying the maximum byte or packet limit for the PCAP file (which is required when executing the query from the DANZ Monitoring Fabric (DMF) Controller).

To access the RN directly, use the URL https://RecorderNodeIP in a web browser, as shown below:
Figure 36. URL to Recorder Node
The following page will be displayed:
Figure 37. Recorder Node Page
  • Recorder Node IP Address: Enter the target RN IP address.
  • DMF Controller Username: Provide the DMF Controller username.
  • DMF Controller Password: Provide the password for authentication.
  • Stenographer Query Filter: Use the query filter to filter the query results to look for specific packets. For example, to search for packets with a source IP address of 10.0.0.145 in the last 10 minutes, use the following filter:
    after 10m ago and src host 10.0.0.145
  • Stenographer Query ID: Starting in DMF 8.0, a Universally Unique Identifier (UUID) is required to run queries. To generate a UUID, run the following command on any Linux machine and use the result as the Stenographer query ID:
    $ uuidgen
    b01308db-65f2-4d7c-b884-bb908d111400
  • Save pcap as: Provide the file name for this PCAP query result.
  • Submit Request: Sends a query to the specified RN and saves the PCAP file with the provided file name to the default download location for the browser.
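If uuidgen is unavailable, Python's standard library produces an equivalent random (version 4) UUID:

```python
import uuid

# Generate a random UUID suitable for use as the Stenographer query ID.
query_id = str(uuid.uuid4())
print(query_id)  # e.g. b01308db-65f2-4d7c-b884-bb908d111400
```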

Recorder Node Management Migration L3ZTN

After completing the first boot (initial configuration) in a Layer-3 topology, remove the Recorder Node (RN) from the old Controller and point it to the new Controller via the CLI.
Note: For appliances to connect to the DANZ Monitoring Fabric (DMF) Controller in Layer-3 Zero Touch Network (L3ZTN) mode, configure the DMF Controller deployment mode as pre-configure.

To migrate management to a new Controller, follow the steps below:

  1. Remove the RN and switch from the old Controller using the commands below:
    controller-1(config)# no recorder-node device <RecNode>
    controller-1(config)# no switch <Arista7050>
  2. Add the switch to the new Controller.
  3. SSH to the RN and configure the new Controller IP using the zerotouch l3ztn controller-ip command:
    controller-1(config)# zerotouch l3ztn controller-ip 10.2.0.151
  4. After pointing the RN to use the new Controller, reboot the RN.
  5. Once the RN is back online, the DMF Controller receives the ZTN request.
  6. After the DMF Controller has received the ZTN request from the RN, add the RN to the DMF Controller running-configuration using the commands below:
    controller-1(config)# recorder-node device RecNode
    controller-1(config-recorder-node)# mac 24:6e:96:78:58:b4
  7. Verify the addition of the RN to the new DMF Controller.

Recorder Node CLI

The following commands are available from the Recorder Node (RN):

Use the show version command to view the version and image information that the RN is running.
RecNode(config)# show version
Controller Version : DMF Recorder Node 8.1.0 (bigswitch/enable/dmf-8.1.x #5)
RecNode(config)#
Use the show controllers command to view the DANZ Monitoring Fabric (DMF) Controllers connected to the Recorder Node.
Note: All cluster nodes appear in the command output if the RN is connected to a DMF Controller cluster.
RecNode(config)# show controllers
controller              Role    State      Aux
-----------------------|-------|----------|---|
tcp://10.106.8.2:6653   master  connected  0
tcp://10.106.8.3:6653   slave   connected  0
tcp://10.106.8.3:6653   slave   connected  1
tcp://10.106.8.3:6653   slave   connected  2
tcp://10.106.8.2:6653   master  connected  1
tcp://10.106.8.2:6653   master  connected  2
RecNode(config)#

Multiple Queries

Use the GUI to run multiple Recorder Node (RN) queries.

To run queries on recorded packets by the RN, navigate to the Monitoring > Recorder Nodes page.

Under the Query section, click on the Query Type drop-down to select the type of analysis to run on the recorded packets, as shown below:
Figure 38. Query Type
After selecting the query type, use filters to limit or narrow the search to obtain specific results. Providing specific filters also helps the query analysis complete faster. In the following example, the TCP query returns results for IP address 10.240.30.24 for the past 10 minutes.
Figure 39. Query - IP Address and Time
After entering the desired filters, click on the Submit button. The Progress dialog will be displayed, showing the Elapsed Time and Progress percentage of the running query:
Figure 40. Query Progress
While a query is in progress, initiate another query from a new DANZ Monitoring Fabric (DMF) Controller web session. View the query progress under the Active Queries section:
Figure 41. Active Queries

Ability to Deduplicate Packets - Query from Recorder Node

For Recorder Node queries, the recorded packets matching a specified query filter may contain duplicates when packet recording occurs at several different TAPs within the same network; i.e., as a packet moves through the network, it may be recorded multiple times. The dedup feature removes duplicate packets from the query results. By eliminating redundant information, packet deduplication improves the clarity, accuracy, and conciseness of query results. It also significantly reduces the size of results returned by packet query types.

Using the CLI to Deduplicate Packets

In the DANZ Monitoring Fabric (DMF) Controller CLI, packet deduplication is available for the packet data, packet object, size, and replay query types. Deduplication is off by default for these queries. Add the dedup option to the end of the query command after all optional values (if any) have been selected to enable deduplication.

The following are command examples of enabling deduplication.

Enabling deduplication for a size query:

controller# query recorder-node rn size filter "before 5s ago" dedup

Enabling deduplication for a packet data query specifying a limit for the size of the PCAP file returned in bytes:

controller# query recorder-node rn packet-data filter "before 5s ago" limit-bytes 2000 dedup

Enabling deduplication for a replay query:

controller# replay recorder-node rn to-delivery dintf filter "before 5s ago" dedup

Enabling deduplication for a replay query specifying the replay rate:

controller# replay recorder-node rn to-delivery dintf filter "before 5s ago" replay-rate 100 dedup

Specify a time window (in milliseconds) for deduplication. The time window defines the minimum separation between the timestamps of two identical packets for them not to be considered duplicates of each other. For example, with a time window of 200 ms, two identical packets with timestamps 200 ms (or less) apart are duplicates of each other, while two identical packets with timestamps more than 200 ms apart are not.

The time window must be an integer between 0 and 999 (inclusive); it defaults to 200 ms when deduplication is enabled and no time window value is set.

To configure a time window value, use the dedup-window option followed by an integer value for the time window after the dedup option.

controller# query recorder-node rn size filter "before 5s ago" dedup dedup-window 150
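As an illustrative sketch only (the Recorder Node's actual implementation is not described here), the time-window rule behaves like this:

```python
# Illustrative dedup pass, assuming packets are (timestamp_ms, payload) pairs:
# a packet is a duplicate if an identical packet was kept within the last
# `window_ms` milliseconds (default 200 ms, matching the documented default).

def dedup(packets, window_ms=200):
    last_kept = {}  # payload -> timestamp of the most recently kept copy
    kept = []
    for ts, payload in sorted(packets):
        prev = last_kept.get(payload)
        if prev is None or ts - prev > window_ms:
            kept.append((ts, payload))
            last_kept[payload] = ts
    return kept

pkts = [(0, b"A"), (150, b"A"), (400, b"A"), (50, b"B")]
# (150, b"A") falls within 200 ms of (0, b"A") and is dropped;
# (400, b"A") arrives more than 200 ms later and is kept.
```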

Using the GUI to Deduplicate Packets

In the DANZ Monitoring Fabric (DMF) Controller GUI, packet deduplication is available for the packet data, packet object, size, replay, application, and analysis query types. Deduplication is off by default for these queries. To enable deduplication, perform the following steps:
  1. Set the Deduplication toggle switch to Yes in the query submission window.
  2. Specify an optional time window (in milliseconds) as required by entering an integer between 0 and 999 (inclusive) into the Deduplication Time Window field. The time window will default to 200 ms if the time window value is unset.
  3. Click Submit to continue.
Note: If a time window value is specified but deduplication is off, packet deduplication will not occur.
The following example illustrates enabling deduplication for a size query specifying a time window value.
Figure 42. Query

Limitations

A query with packet deduplication enabled takes longer to complete than the same query with deduplication disabled; this is why packet deduplication is off by default.

The maximum time window value permitted is 999 ms to ensure that TCP retransmissions are not regarded as duplicates, assuming that the receive timeout value for TCP retransmissions (of any kind) is at least 1 second. If the receive timeout value is less than 1 second (specifically, 999 ms or less), TCP retransmissions may be regarded as duplicates when the time window value is larger than the receive timeout value.

Due to memory constraints, some duplicates may not be removed as expected. This scenario is likely when a substantial number of packets matching the query filter all have timestamps within the specified time window of each other; the query is then said to have exceeded the packet window capacity. To mitigate this, decrease the time window value or use a more specific query filter to reduce the number of packets matching the filter at any given time.

Enabling Egress sFlow® on Recorder Node Interfaces

Enable egress sFlow®* to sample traffic sent to any DANZ Monitoring Fabric (DMF) Recorder Node (RN) attached to the fabric. Examining these sampled packets on a configured sFlow collector allows the identification of post-match-rule flows recorded by the RNs without performing a query against the RNs. While not explicitly required, Arista Networks highly recommends using the DMF Analytics Node (AN) as the configured sFlow collector, as it can automatically identify packets sampled utilizing this feature.

Platform Compatibility

The feature is supported on all platforms except the following series:

  • DCS-7280R
  • DCS-7280R2
  • DCS-7500R
  • DCS-7020
  • DCS-7050X4

Configuration

Use the DMF CLI or the GUI to turn the feature on or off.

Using the CLI to Enable Egress sFlow

The egress sFlow feature requires a configured sFlow collector. After configuring the sFlow collector, enter the following command from the config mode to enable the feature:

Controller-1(config)# recorder-node sflow

To disable the feature, enter the command:

Controller-1(config)# no recorder-node sflow

Using the GUI to Enable Egress sFlow

Using the GUI

After configuring the fabric for sFlow and setting up the sFlow collector, navigate to the Monitoring > Recorder Node page. Under the Global Configuration section, click the Configure global settings button.

Figure 43. Configure Global Settings

In the Configure Global Settings pop-up window, enable the sFlow setting and click Submit.

Figure 44. Enable sFlow

Analytics Node

When a DMF Analytics Node is used as the sFlow collector, it provides a dashboard that displays the results from this feature. To access the results:

  1. Navigate to the sFlow dashboard from the Fabric dashboard.
  2. Select the disabled RN Flows filter.
  3. Select the option to Re-enable the filter, as shown below.
Figure 45. Re-enable sFlow

Troubleshooting Egress sFlow Configurations

Switches not associated with an sFlow collector (either a global sFlow collector or a switch-specific one) do not have the feature active even if it is enabled. Ensure the fabric is set up for sFlow and that a configured sFlow collector exists. To verify that a configured global sFlow collector exists, use the command:

Controller-1# show sflow default 

A configured collector appears as an entry in the table under the column labeled collector. Alternatively, to verify a configured collector exists for a given switch, use the command:

Controller-1# show switch switch-name table sflow-collector

This command displays a table with one entry per configured collector.

A feature-unsupported-on-device warning appears when connecting an unsupported switch to an RN. The feature does not sample packets passing to an RN from an unsupported switch. View any such warnings using the GUI or using the following CLI command:

Controller-1# show fabric warnings feature-unsupported-on-device

To verify the feature is active on a given switch, use the command:

Controller-1# show switch switch-name table sflow-sample

If the feature is enabled, the entries associated with the ports connected to an RN include an EgressSamplingRate(number) value with a number greater than 0. The following example illustrates Port(1) on <switch-name> connecting to an RN.

Controller-1# show switch <switch-name> table sflow-sample
#  Sflow-sample  Device name    Entry key  Entry value
--|-------------|--------------|----------|------------------------------------------------------------------------------|
5  352           <switch-name>  Port(1)    SamplingRate(0), EgressSamplingRate(10000), HeaderSize(128), Interval(10000)

Guidelines and Limitations for Enabling Egress sFlow

Consider the following guidelines and limitations while enabling Egress sFlow:

  • The Egress sFlow support for the Recorder Nodes (RN) feature requires a configured sFlow collector in a fabric configured to allow sFlows.
  • If a packet enters a switch through a filter interface with sFlow enabled and exits through a port connected to an RN while the feature is enabled, only one sFlow packet (i.e., the ingress sFlow packet) is sent to the collector.
  • The Egress sFlow feature does not identify which RN recorded a given packet in a fabric with multiple RNs. This is acceptable in the normal case, as queries are issued to the RNs in aggregate rather than to individual RNs, so knowing that some RN received a packet is sufficient. In some cases, it may be possible to make that determination from the outport of the sFlow packet, but that information may not be available in all cases. This is an inherent limitation of egress sFlow.
  • An enabled egress sFlow feature captures the packets sent to any RN with recording enabled, regardless of whether the RN is actively recording or not.

Recorder Node Recording State API

The Recorder Node (RN) recording statistics API on the DANZ Monitoring Fabric (DMF) Controller includes information about the operational state of the ongoing recording.

The information is available in device/state/packet-recorder/recording/recording-state where recording-state is the container holding the opstate information.

Note: This is a simplified version of the path; the full path requires the device name and type in order to select the specific RN device for which to display state information.

Within the container, there are the following two values:

  • state: an enumeration describing the recording opstate that has one of the following values: initializing, ready, active, and stopped.
  • description: a string describing the state in more detail.

The following table lists the possible recording state values and describes how to interpret each scenario.

state         description                                                                    Interpretation
-------------|------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
initializing  Recording application syncing files, {}% complete.                             The application is synchronizing previously recorded packets, so it cannot record yet. The formatted value is the percentage of synchronization completed so far.
initializing  Recording application is starting up.                                          The application has finished synchronization and is now completing the setup for recording.
ready         Ready to record, no recordable traffic received.                               No traffic has been received since the application completed initializing.
active        Recording traffic.                                                             Traffic is being received and recorded to disk.
stopped       Recording threads not running.                                                 Recording threads were interrupted or terminated, so recording has stopped.
stopped       Recording not enabled.                                                         Recording is not enabled, and packets are not recorded.
stopped       Disk space exhausted with stop-and-wait mode configured. Awaiting user input.  Recording has stopped because disk space is exhausted and the removal policy is configured as stop-and-wait. Delete packets to resume recording.
stopped       Recording state not found.                                                     The recording application isn't running, so there is no recording state information.
When the state is not active, an RN warning is generated on the DMF Controller to indicate that a nonactive RN is connected to the fabric.
Note: Any state other than active is considered inactive. This does not necessarily indicate a problem; however, take action as needed based on the interpretations provided.
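The warning rule above can be sketched as follows (a hypothetical helper, not product code; the state names follow the API description, and the message format loosely follows the show recorder-node warnings output):

```python
# Sketch: any recording state other than "active" is treated as inactive
# and produces a Controller warning string; "active" produces none.

RECORDING_STATES = {"initializing", "ready", "active", "stopped"}

def warning_for(rn_name, state, description):
    if state not in RECORDING_STATES:
        raise ValueError("unknown recording state: " + state)
    if state == "active":
        return None  # active recording: no warning
    return "%s Inactive Recording State: %s - %s" % (
        rn_name, state.capitalize(), description)
```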

Additionally, the type of logical disk volume used by each recording thread is part of the RN recording statistics API on the DMF Controller.

This information is available in device/state/packet-recorder/recording/recording-thread/disk, where disk is an enumeration with one of two values, primary or backup, depending on whether the thread is recording to the primary or the backup configured logical disk volume. This is a simplified version of the path. The disk type does not imply whether the disk volume is local or remote, as the storage configuration determines this.

Show Commands

The show recorder-node device <rn name> recording-health command includes the state from the recording state and the disk type for each recording thread. Use the details option of this command to view the description of the recording state in addition to the previously stated information.

The following examples include the additional information: the Recording state line, the Disk column, and (with the details option) the Recording state Details line.
C1> show recorder-node device <rn name> recording-health
~~~~~~~~~~~~~~~~ Recording Application~~~~~~~~~~~~~~~~
Recording Collection Time: 2024-04-23 15:12:30 UTC
Recording CRC Errors : 0
Recording Frame Length Errors: 0
Recording Back Pressure Errors : 0
Recording Undersized Frames: 0
Recording Frames 64B : 0
Recording Frames 65-127B : 0
Recording Frames 128-255B: 0
Recording Frames 256-511B: 0
Recording Frames 512-1023B : 0
Recording Frames 1024-1522B: 0
Recording Frames 1523-9522B: 0
Recording Oversized Frames : 0
Recording state: Active

~~~~~~~~~~~~~~~~~~~~~~~ Recording Instance~~~~~~~~~~~~~~~~~~~~~~~
Core  Disk     Dropped Packets  Total Packets  Collection Start Time
-----|--------|----------------|--------------|------------------------|
1     primary  0                204            2024-04-23 15:00:34 UTC
C1> show recorder-node device <rn name> recording-health details
~~~~~~~~~~~~~~~~ Recording Application~~~~~~~~~~~~~~~~
Recording Collection Time: 2024-04-23 15:12:30 UTC
Recording CRC Errors : 0
Recording Frame Length Errors: 0
Recording Back Pressure Errors : 0
Recording Undersized Frames: 0
Recording Frames 64B : 0
Recording Frames 65-127B : 0
Recording Frames 128-255B: 0
Recording Frames 256-511B: 0
Recording Frames 512-1023B : 0
Recording Frames 1024-1522B: 0
Recording Frames 1523-9522B: 0
Recording Oversized Frames : 0
Recording state: Active
Recording state Details: Recording traffic.

~~~~~~~~~~~~~~~~~~~~~~~ Recording Instance~~~~~~~~~~~~~~~~~~~~~~~
Core  Disk     Dropped Packets  Total Packets  Collection Start Time
-----|--------|----------------|--------------|------------------------|
1     primary  0                204            2024-04-23 15:00:34 UTC
C1> show recorder-node device <rn name> recording-health
~~~~~~~~~~~~~~~~ Recording Application~~~~~~~~~~~~~~~~
Recording Collection Time: 2024-04-23 15:12:30 UTC
Recording CRC Errors : 0
Recording Frame Length Errors: 0
Recording Back Pressure Errors : 0
Recording Undersized Frames: 0
Recording Frames 64B : 0
Recording Frames 65-127B : 0
Recording Frames 128-255B: 0
Recording Frames 256-511B: 0
Recording Frames 512-1023B : 0
Recording Frames 1024-1522B: 0
Recording Frames 1523-9522B: 0
Recording Oversized Frames : 0
Recording state: Active

~~~~~~~~~~~~~~~~~~~~~~~ Recording Instance~~~~~~~~~~~~~~~~~~~~~~~
Core  Disk     Dropped Packets  Total Packets  Collection Start Time
-----|--------|----------------|--------------|------------------------|
1     backup   0                204            2024-04-23 15:00:34 UTC
C1> show recorder-node device <rn name> recording-health details
~~~~~~~~~~~~~~~~ Recording Application~~~~~~~~~~~~~~~~
Recording Collection Time: 2024-04-23 15:12:30 UTC
Recording CRC Errors : 0
Recording Frame Length Errors: 0
Recording Back Pressure Errors : 0
Recording Undersized Frames: 0
Recording Frames 64B : 0
Recording Frames 65-127B : 0
Recording Frames 128-255B: 0
Recording Frames 256-511B: 0
Recording Frames 512-1023B : 0
Recording Frames 1024-1522B: 0
Recording Frames 1523-9522B: 0
Recording Oversized Frames : 0
Recording state: Active
Recording state Details: Recording traffic.

~~~~~~~~~~~~~~~~~~~~~~~ Recording Instance~~~~~~~~~~~~~~~~~~~~~~~
Core  Disk     Dropped Packets  Total Packets  Collection Start Time
-----|--------|----------------|--------------|------------------------|
1     backup   0                204            2024-04-23 15:00:34 UTC

The show recorder-node warnings command includes nonactive RN warnings if any exist.
C1> show recorder-node warnings

~ RAID Health Warning(s) ~
None.

~ Low Disk Space Warning(s) ~
None.

~ Full Disk Warning(s) ~
None.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Non Active Recorder Node Warning(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Recorder Node Message
-|-------------|----------------------------------------------------------------------------------|
1 rn1 Inactive Recording State: Ready - Ready to record, no recordable traffic received.

Limitations

The ready state only occurs after the recording application has finished initializing if no recordable traffic has been received yet. The recording application must undergo its initialization process whenever the RN is rebooted, restarted, or after restarting the RN application from the DMF Controller. If the RN is in the active state and stops receiving packets, it will not regress into the ready state; it will remain in the active state.

*sFlow® is a registered trademark of Inmon Corp.

Using the DMF Service Node Appliance

This chapter describes how to configure the managed services provided by the DANZ Monitoring Fabric (DMF) Service Node Appliance.

Overview

The DANZ Monitoring Fabric (DMF) Service Node has multiple interfaces connected to traffic for processing and analysis. Each interface can be programmed independently to provide any supported managed-service actions.

To create a managed service, identify a switch interface connected to the service node, specify the service action, and configure the service action options.

Configure a DMF policy to use the managed service by name. This causes the Controller to forward the traffic selected by the policy to the service node. The processed traffic is returned to the monitoring fabric through the same interface and sent to the tools (delivery interfaces) defined in the DMF policy.

If the traffic volume selected by the policy is too much for a single service node interface, define a LAG on the switch connected to the service node, then use the LAG interface when defining the managed service. All service node interfaces connected to the LAG are configured to perform the same action. The traffic selected by the policy is automatically load-balanced among the LAG member interfaces, and the return traffic is distributed similarly.
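The load-balancing behavior can be sketched as a flow hash over the LAG members (illustrative only; real switches hash in hardware, and the interface names here are hypothetical):

```python
# Sketch of flow-hash load balancing: a flow's 5-tuple is hashed and mapped
# to one LAG member, so all packets of a flow use the same member link and
# per-flow packet ordering is preserved.

def pick_member(flow_tuple, members):
    return members[hash(flow_tuple) % len(members)]

members = ["ethernet40", "ethernet41", "ethernet42", "ethernet43"]  # hypothetical
flow = ("10.0.0.1", "10.0.0.2", 6, 12345, 80)  # src, dst, proto, sport, dport
# Within a run, the same flow always maps to the same member.
```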

Changing the Service Node Default Configuration

Configuration settings are automatically downloaded to the service node from the DANZ Monitoring Fabric (DMF) Controller to eliminate the need for box-by-box configuration. However, the option exists to override the default configuration for a service node from the config-service-node submode for any service node.
Note: These options are available only from the CLI and are not included in the DMF GUI.
To change the CLI mode to config-service-node, enter the following command from config mode on the active DMF Controller:
controller-1(config)# service-node <service_node_alias>
controller-1(config-service-node)#

Replace service_node_alias with the alias to use for the service node. This alias is associated with the hardware MAC address of the service node using the mac command. The hardware MAC address configuration is mandatory for the service node to interact with the DMF Controller.

Use any of the following commands from the config-service-node submode to override the default configuration for the associated service node:
  • admin password: set the password to log in to the service node as an admin user.
  • banner: set the service node pre-login banner message.
  • description: set a brief description.
  • logging: enable service node logging to the Controller.
  • mac: configure a MAC address for the service node.
  • ntp: configure the service node to override default parameters.
  • snmp-server: configure an SNMP trap host to receive SNMP traps from the service node.

Using SNMP to Monitor DPDK Service Node Interfaces

Directly fetch the counters and status of the service node interfaces handling traffic (DPDK interfaces). The following are the supported OIDs.
interfaces MIB: .1.3.6.1.2.1.2
ifMIBObjects MIB: .1.3.6.1.2.1.31.1
Note: A three-digit number between 101 and 116 identifies SNI DPDK (traffic) interfaces.
In the following example, interface sni5 (105) handles data traffic. To fetch the packet count, use the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.6.105
          IF-MIB::ifHCInOctets.105 = Counter64: 10008
To fetch the counters for packets exiting the service node interface, enter the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.10.105
          IF-MIB::ifHCOutOctets.105 = Counter64: 42721
To fetch Link Up and Down status, enter the following command:
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.109
IF-MIB::ifOperStatus.109 = INTEGER: down(2)
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.105
IF-MIB::ifOperStatus.105 = INTEGER: up(1)
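As a convenience sketch (a hypothetical helper, not a product tool), the OID for a given SNI interface can be assembled from the documented numbering, where sni<N> maps to ifIndex 100 + N:

```python
# Assemble SNMP OIDs for SNI DPDK (traffic) interfaces, assuming the
# documented numbering: sni5 -> ifIndex 105, with indices 101..116.

IF_HC_IN_OCTETS = ".1.3.6.1.2.1.31.1.1.1.6"    # ifHCInOctets
IF_HC_OUT_OCTETS = ".1.3.6.1.2.1.31.1.1.1.10"  # ifHCOutOctets
IF_OPER_STATUS = ".1.3.6.1.2.1.2.2.1.8"        # ifOperStatus

def sni_ifindex(name):
    """Map an SNI name such as 'sni5' to its SNMP ifIndex (105)."""
    n = int(name[3:])
    if not 1 <= n <= 16:
        raise ValueError("SNI DPDK interfaces are numbered sni1..sni16")
    return 100 + n

def oid_for(base, name):
    return base + "." + str(sni_ifindex(name))
```

The resulting OID string can then be passed to snmpget, as in the transcripts above.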

Configuring Managed Services

To view, edit, or create DANZ Monitoring Fabric (DMF) managed services, select the Monitoring > Managed Services option.
Figure 1. Managed Services

This page displays the service node appliance devices connected to the DMF Controller and the services configured on the Controller.

Using the GUI to Define a Managed Service

To create a new managed service, perform the following steps:

  1. Click the Provision control (+) in the Managed Services table. The system displays the Create Managed Service dialog, shown in the following figure.
    Figure 2. Create Managed Service: Info
  2. Assign a name to the managed service.
  3. (Optional) Provide a text description of the managed service.
  4. Select the switch and interface providing the service.
    The Show Managed Device Switches Only checkbox, enabled by default, limits the switch selection list to service node appliances. Enable the Show Connected Switches Only checkbox to limit the display to connected switches.
  5. Select the action from the Action selection list, which provides the following options.
    • Application ID
    • Deduplication: Deduplicate selected traffic, including NATed traffic.
    • GTP Correlation
    • Header Strip: Remove bytes from the start of the packet up to the selected anchor plus offset bytes
    • Header Strip Cisco Fabric Path Header: Remove the Cisco Fabric Path encapsulation header
    • Header Strip ERSPAN Header: Remove the Encapsulated Remote Switched Port Analyzer (ERSPAN) encapsulation header
    • Header Strip Geneve Header: Remove the Generic Network Virtualization Encapsulation (Geneve) header
    • Header Strip L3 MPLS Header: Remove the Layer 3 MPLS encapsulation header
    • Header Strip LISP Header: Remove the Locator/ID Separation Protocol (LISP) encapsulation header
    • Header Strip VXLAN Header: Remove the Virtual Extensible LAN (VXLAN) encapsulation header
    • IPFIX: Generate IPFIX records from matching traffic and forward them to specified collectors.
    • Mask: Mask sensitive information in packet fields as specified by the user.
    • NetFlow: Generate NetFlow records from matching traffic and forward them to specified collectors.
    • Pattern-Drop: Drop matching traffic.
    • Pattern Match: Forward matching traffic.
    • Session Slice: Slice TCP sessions.
    • Slice: Slice the given number of bytes based on the specified starting point in the packet.
    • TCP Analysis
    • Timestamp: Identify the time that the service node receives the packet.
    • UDP Replication: Copy UDP messages to multiple IP destinations, such as Syslog or NetFlow messages.
  6. (Optional) Identify the starting point for service actions.
    Identify the start point for the deduplication, mask, pattern-match, pattern-drop services, or slice services using one of the keywords listed below.
    • packet-start: add the number of bytes specified by the integer value to the first byte in the packet.
    • l3-header-start: add the number of bytes specified by the integer value to the first byte in the Layer 3 header.
    • l4-header-start: add the number of bytes specified by the integer value to the first byte in the layer-4 header.
    • l4-payload-start: add the number of bytes specified by the integer value to the first byte in the layer-4 user data.
    • integer: specify the number of bytes to offset for determining the start location for the service action relative to the specified start keyword.
  7. To assign a managed service to a policy, enable the checkbox on the Managed Services page of the Create Policy or Edit Policy dialog.
  8. Select the backup service from the Backup Service selection list to create a backup service. The backup service is used when the primary service is not available.
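The anchor and offset keywords in step 6 can be illustrated with a small sketch (a hypothetical helper; it assumes an untagged Ethernet/IPv4/TCP packet with standard 14/20/20-byte headers):

```python
# Where a service action starts for a given anchor keyword plus byte offset,
# assuming fixed-size Ethernet (14 B), IPv4 (20 B), and TCP (20 B) headers.

ETH_LEN, IPV4_LEN, TCP_LEN = 14, 20, 20

ANCHORS = {
    "packet-start": 0,
    "l3-header-start": ETH_LEN,
    "l4-header-start": ETH_LEN + IPV4_LEN,
    "l4-payload-start": ETH_LEN + IPV4_LEN + TCP_LEN,
}

def action_start(anchor, offset):
    """Absolute byte position in the packet where the service action begins."""
    return ANCHORS[anchor] + offset

# A mask starting 4 bytes into the TCP payload begins at byte 14+20+20+4 = 58.
```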

Using the CLI to Define a Managed Service

Note: When connecting a LAG interface to the DANZ Monitoring Fabric (DMF) Service Node Appliance, member links should be of the same speed and can span multiple service nodes. The maximum number of supported member links per LAG interface varies by switch platform, up to 32. Refer to the hardware guide for the exact details of the supported configuration.

To configure a service to direct traffic to a DMF service node, complete the following steps:

  1. Define an identifier for the managed service by entering the following command:
    controller-1(config)# managed-service DEDUPLICATE-1
    controller-1(config-managed-srv)#

    This step enters the config-managed-srv submode to configure a DMF-managed service.

  2. (Optional) Configure a description for the current managed service by entering the following command:
    controller-1(config-managed-srv)# description “managed service for policy DEDUPLICATE-1”
    The following are the commands available from this submode:
    • description: provide a service description
    • post-service-match: select traffic after applying the header strip service
    • Action sequence number in the range [1 - 20000]: identifier of service action
    • service-interface: associate an interface with the service
  3. Use a number in the range [1 - 20000] to identify a service action for a managed service.
    The following summarizes the available service actions. See the subsequent sections for details and examples for specific service actions.
    • dedup {anchor-offset | full-packet | routed-packet}
    • header-strip {l4-header-start | l4-payload-start | packet-start }[offset]
    • decap-cisco-fp {drop}
    • decap-erspan {drop}
    • decap-geneve {drop}
    • decap-l3-mpls {drop}
    • decap-lisp {drop}
    • decap-vxlan {drop}
    • mask {mask/pattern} [{packet-start | l3-header-start | l4-header-start | l4-payload-start} mask/offset] [mask/mask-start mask/mask-end]}
    • netflow Delivery_interface Name
    • ipfix Delivery_interface Name
    • udp-replicate Delivery_interface Name
    • tcp-analysis Delivery_interface Name
    Note: The IPFIX, NetFlow, and udp-replicate service actions enable a separate submode for defining one or more specific configurations. One of these services must be the last service applied to the traffic selected by the policy.
    • pattern-drop pattern [{l3-header-start | l4-header-start | packet-start }]
    • pattern-match pattern [{l3-header-start | l4-header-start | packet-start }] |
    • slice {packet-start | l3-header-start | l4-header-start | l4-payload-start} integer}
    • timestamp
    For example, the following command enables packet deduplication on the routed packet:
    controller-1(config-managed-srv)# 1 dedup routed-packet
  4. Optionally, identify the start point for the mask, pattern-match, pattern-drop services, or slice services.
  5. Identify the service interface for the managed service by entering the following command:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 ethernet40
    Use a port channel instead of an interface to increase the bandwidth available to the managed service. The following example enables lag-interface1 for the service interface:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 lag1
  6. Apply the managed service within a policy like any other service, as shown in the following examples for deduplication, NetFlow, pattern matching (forwarding), and packet slicing services.
Note: Multiple DMF policies can use the same managed service, for example, a packet slicing managed service.

Monitoring Managed Services

To identify managed services bound to a service node interface and the health status of the respective interface, use the following commands:
controller-1# show managed-service-device <SN-Name> interfaces
controller-1# show managed-service-device <SN-Name> stats

For example, the following command shows the managed services handled by the Service Node Interface (SNI):


Note:The show managed-service-device <SN-Name> stats <Managed-service-name> command filters the statistics of a specific managed service.
The Load column shows no, low, moderate, high, and critical health indicators. These health indicators are represented by green, yellow, and red under DANZ Monitoring Fabric > Managed Services > Devices > Service Stats. They reflect the processor load on the service node interface at that instant but do not show the bandwidth of the respective data port (SNI) handling traffic, as shown in the following sample snapshot of the Service Stats output.
Figure 3. Service Node Interface Load Indicator

Deduplication Action

The DANZ Monitoring Fabric (DMF) Service Node enhances the efficiency of network monitoring tools by eliminating duplicate packets. Duplicate packets can be introduced into the out-of-band monitoring data stream by receiving the same flow from multiple TAP or SPAN ports spread across the production network. Deduplication eliminates these duplicate packets and enables more efficient use of passive monitoring tools.

The DMF Service Node provides three modes of deduplication for different types of duplicate packets.
  • Full packet deduplication: deduplicates incoming packets that are identical at the L2/L3/L4 layers.
  • Routed packet deduplication: as packets traverse an IP network, the MAC address changes from hop to hop. Routed packet deduplication enables users to match packet contents starting from the L3 header.
  • NATed packet deduplication: to perform NATed deduplication, the service node compares packets in the configured window that are identical starting from the L4 payload. To use NATed packet deduplication, configure the following fields as required:
    • Anchor: Packet Start, L2 Header Start, L3 Header Start, or L3 Payload Start.
    • Offset: the number of bytes from the anchor where the deduplication check begins.

The time window in which the service looks for duplicate packets is configurable. Select one of the following values: 2 ms (the default), 4 ms, 6 ms, or 8 ms.
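For example, assuming the CLI syntax shown in the configuration examples that follow, a full-packet deduplication service with a 4 ms window could be configured as follows (the service, switch, and interface names are illustrative):
controller-1(config)# managed-service MS-DEDUP-4MS
controller-1(config-managed-srv)# 1 dedup full-packet window 4
controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 ethernet40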

GUI Configuration

Figure 4. Create Managed Service > Action: Deduplication Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-DEDUP-FULL-PACKET
! managed-service
managed-service MS-DEDUP-FULL-PACKET
description 'This is a service that does Full Packet Deduplication'
1 dedup full-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/1
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-ROUTED-PACKET
! managed-service
managed-service MS-DEDUP-ROUTED-PACKET
description 'This is a service that does Routed Packet Deduplication'
1 dedup routed-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/2
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-NATTED-PACKET
! managed-service
managed-service MS-DEDUP-NATTED-PACKET
description 'This is a service that does Natted Packet Deduplication'
1 dedup anchor-offset l4-payload-start 0 window 8
service-interface switch CORE-SWITCH-1 ethernet13/3
Controller-1(config)#
Note: The existing command is augmented to show the deduplication percentage. The command syntax is show managed-service-device <Service-Node-Name> stats <dedup-service-name> dedup
Controller-1(config)# show managed-service-device DMF-SN-R740-1 stats MS-DEDUP dedup
~~~~~~~~~~~~~~~~ Stats ~~~~~~~~~~~~~~~~
Interface Name : sni16
Function : dedup
Service Name : MS-DEDUP
Rx packets : 9924950
Rx bytes : 4216466684
Rx Bit Rate : 1.40Gbps
Applied packets : 9923032
Applied bytes : 4216337540
Applied Bit Rate : 1.40Gbps
Tx packets : 9796381
Tx bytes : 4207106113
Tx Bit Rate : 1.39Gbps
Deduped frame count : 126651
Deduped percent : 1.2763336851075358
Load : low
Controller-1(config)#

Header Strip Action

This action removes specific headers from the traffic selected by the associated DANZ Monitoring Fabric (DMF) policy. Alternatively, define custom header stripping based on the starting position of the Layer-3 header, the Layer-4 header, the Layer-4 payload, or the first byte in the packet.

The following decap actions can be used independently of the header-strip configuration stanza:
  • decap-erspan: remove the Encapsulated Remote Switch Port Analyzer (ERSPAN) header.
  • decap-cisco-fabric-path: remove the Cisco FabricPath protocol header.
  • decap-l3-mpls: remove the Layer-3 Multi-protocol Label Switching (MPLS) header.
  • decap-lisp: remove the LISP header.
  • decap-vxlan [udp-port vxlan port]: remove the Virtual Extensible LAN (VXLAN) header.
  • decap-geneve: remove the Geneve header.
Note: For the Header Strip and Decap actions, apply post-service rules to select traffic after stripping the original headers.
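For example, a managed service that removes a VXLAN header might be sketched as follows (the service, switch, and interface names are illustrative):
! managed-service
managed-service MS-DECAP-VXLAN
1 decap-vxlan
service-interface switch CORE-SWITCH-1 ethernet13/4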
To customize the header-strip action, use one of the following keywords to strip up to the specified location in each packet:
  • l3-header-start
  • l4-header-start
  • l4-payload-start
  • packet-start

Input a positive integer representing the offset from which the strip action begins. If no offset is specified, header stripping starts from the first byte in the packet.

GUI Configuration

Figure 5. Create Managed Service: Header Strip Action

After assigning the required actions to the header stripping service, click Next or Post-Service Match.

The system displays the Post Service Match page, used in conjunction with the header strip service action.
Figure 6. Create Managed Service: Post Service Match for Header Strip Action

CLI Configuration

The header-strip service action strips the header and replaces it in one of the following ways:

  • Add the original L2 src-mac, and dst-mac.
  • Add the original L2 src-mac, dst-mac, and ether-type.
  • Specify and add a custom src-mac, dst-mac, and ether-type.

The following are examples of custom header stripping:

This example strips the header and replaces it with the original L2 src-mac and dst-mac.
! managed-service
managed-service MS-HEADER-STRIP-1
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac
service-interface switch CORE-SWITCH-1 ethernet13/1
This example adds the original L2 src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-2
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac-ethertype
service-interface switch CORE-SWITCH-1 ethernet13/2
This example specifies the addition of a custom src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-3
1 header-strip packet-start 20 add-custom-l2-header 00:11:01:02:03:04 00:12:01:02:03:04
0x800
service-interface switch CORE-SWITCH-1 ethernet13/3

Configuring the Post-service Match

The post-service match configuration option enables matching on inner packet fields after the DANZ Monitoring Fabric (DMF) Service Node performs header stripping. This option is applied on the post-service interface after the service node completes the strip service action. Feature benefits include the following:
  • The fabric can remain in L3/L4 mode. It is not necessary to change to offset match mode.
  • Easier configuration.
  • All match conditions are available for the inner packet.
  • The policy requires only one managed service to perform the strip service action.
With this feature enabled, DMF knows exactly where to apply the post-service match. The following example illustrates this configuration.
! managed-service
managed-service MS-HEADER-STRIP-4
service-interface switch CORE-SWITCH-1 interface ethernet1
1 decap-l3-mpls
!
post-service-match
1 match ip src-ip 1.1.1.1
2 match tcp dst-ip 2.2.2.0 255.255.255.0
! policy
policy POLICY-1
filter-interface TAP-1
delivery-interface TOOL-1
use-managed-service MS-HEADER-STRIP-4 sequence 1

IPFIX and Netflow Actions

IP Flow Information Export (IPFIX), also known as NetFlow v10, is an IETF standard defined in RFC 7011. The IPFIX generator (agent) gathers and transmits information about flows, which are sets of packets that contain all the keys specified by the IPFIX template. The generator observes the packets received in each flow and forwards the information to the IPFIX collector (server) in the form of a flowset.

Starting with the DANZ Monitoring Fabric (DMF)-7.1.0 release, NetFlow v9 (Cisco proprietary) and IPFIX/NetFlow v10 are both supported. Configuration of the IPFIX managed service is similar to configuration for earlier versions of NetFlow except for the UDP port definition. NetFlow v5 collectors typically listen over UDP port 2055, while IPFIX collectors listen over UDP port 4739.

NetFlow records are typically exported using User Datagram Protocol (UDP) and collected using a flow collector. For a NetFlow service, the service node takes incoming traffic and generates NetFlow records. The service node drops the original packets, and the generated flow records, containing metadata about each flow, are forwarded out of the service node interface.

IPFIX Template

The IPFIX template consists of the key element IDs representing the IP flow, field element IDs representing actions the exporter performs on IP flows matching the key element IDs, a template ID number for uniqueness, collector information, and eviction timers.

To define a template, configure keys of interest representing the IP flow and fields that identify the values measured by the exporter, the exporter information, and the eviction timers. To define the template, select the Monitoring > Managed Service > IPFIX Template option from the DANZ Monitoring Fabric (DMF) GUI or enter the ipfix-template template-name command in config mode, replacing template-name with a unique identifier for the template instance.

IPFIX Keys

Use an IPFIX key to specify the characteristics of the traffic to monitor, such as source and destination MAC or IP address, VLAN ID, Layer-4 port number, and QoS marking. The generator includes a flow in a flowset only when it has all the attributes specified by the keys in the applied template. The flowset is updated only for packets that have all the specified attributes; if a single key is missing, the packet is ignored. To see a listing of the keys supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI or type help key in config-ipfix-template submode. The following are the keys supported in the current release:
  • destination-ipv4-address
  • destination-ipv6-address
  • destination-mac-address
  • destination-transport-port
  • dot1q-priority
  • dot1q-vlan-id
  • ethernet-type
  • icmp-type-code-ipv4
  • icmp-type-code-ipv6
  • ip-class-of-service
  • ip-diff-serv-code-point
  • ip-protocol-identifier
  • ip-ttl
  • ip-version
  • policy-vlan-id
  • records-per-dmf-interface
  • source-ipv4-address
  • source-ipv6-address
  • source-mac-address
  • source-transport-port
  • vlan id
Note: The policy-vlan-id and records-per-dmf-interface keys are Arista-proprietary flow elements. The policy-vlan-id key helps to query per-policy flow information at the Arista Analytics Node (collector) in push-per-policy deployment mode. The records-per-dmf-interface key helps to identify the filter interfaces tapping the traffic. The following limitations apply at the time of IPFIX template creation:
  • The Controller will not allow the key combination of source-mac-address and records-per-dmf-interface in push-per-policy mode.
  • The Controller will not allow the key combinations of policy-vlan-id and records-per-dmf-interface in push-per-filter mode.

IPFIX Fields

A field defines each value updated for the packets the generator receives that match the specified keys. For example, include fields in the template to record the number of packets, the largest and smallest packet sizes, or the start and end times of the flows. To see a listing of the fields supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI, or type help in config-ipfix-template submode. The following are the fields supported:

  • flow-end-milliseconds
  • flow-end-reason
  • flow-end-seconds
  • flow-start-milliseconds
  • flow-start-seconds
  • maximum-ip-total-length
  • maximum-layer2-total-length
  • maximum-ttl
  • minimum-ip-total-length
  • minimum-layer2-total-length
  • minimum-ttl
  • octet-delta-count
  • packet-delta-count
  • tcp-control-bits

Active and Inactive Timers

After the number of minutes specified by the active timer, the flowset is closed and forwarded to the IPFIX collector. The default active timer is one minute. During the number of seconds set by the inactive timer, if no packets that match the flow definition are received, the flowset is closed and forwarded without waiting for the active timer to expire. The default value for the inactive timer is 15 seconds.

Example Flowset

The following is a Wireshark view of an IPFIX flowset.
Figure 7. Example IPFIX Flowset in Wireshark

The following is a running-config that shows the IPFIX template used to generate this flowset.

Example IPFIX Template

! ipfix-template
ipfix-template Perf-temp
template-id 22222
key destination-ipv4-address
key destination-transport-port
key dot1q-vlan-id
key source-ipv4-address
key source-transport-port
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field maximum-ttl
field minimum-ttl
field packet-delta-count

Using the GUI to Define an IPFIX Template

To define an IPFIX template, perform the following steps:
  1. Select the Monitoring > Managed Services option.
  2. On the DMF Managed Services page, select IPFIX Templates.
    The system displays the IPFIX Templates section.
    Figure 8. IPFIX Templates
  3. To create a new template, click the provision (+) icon in the IPFIX Templates section.
    Figure 9. Create IPFIX Template
  4. To add an IPFIX key to the template, click the Settings control in the Keys section. The system displays the following dialog.
    Figure 10. Select IPFIX Keys
  5. Enable each checkbox for the keys to add to the template and click Select.
  6. To add an IPFIX field to the template, click the Settings control in the Fields section. The system displays the following dialog:
    Figure 11. Select IPFIX Fields
  7. Enable the checkbox for each field to add to the template and click Select.
  8. On the Create IPFIX Template page, click Save.
The new template is added to the IPFIX Templates table, with each key and field listed in the appropriate column. Apply this customized template when defining an IPFIX managed service.

Using the CLI to Define an IPFIX Template

  1. Create an IPFIX template.
    controller-1(config)# ipfix-template IPFIX-IP
    controller-1(config-ipfix-template)#

    This changes the CLI prompt to the config-ipfix-template submode.

  2. Define the keys to use for the current template, using the following command:

    [ no ] key { ethernet-type | source-mac-address | destination-mac-address | dot1q-vlan-id | dot1q-priority | ip-version | ip-protocol-identifier | ip-class-of-service | ip-diff-serv-code-point | ip-ttl | source-ipv4-address | destination-ipv4-address | icmp-type-code-ipv4 | source-ipv6-address | destination-ipv6-address | icmp-type-code-ipv6 | source-transport-port | destination-transport-port }

    The keys specify the attributes of the flows to be included in the flowset measurements.

  3. Define the fields to use for the current template, using the following command:
    [ no ] field { packet-delta-count | octet-delta-count | minimum-ip-total-length | maximum-ip-total-length | flow-start-seconds | flow-end-seconds | flow-end-reason | flow-start-milliseconds | flow-end-milliseconds | minimum-layer2-total-length | maximum-layer2-total-length | minimum-ttl | maximum-ttl }

    The fields specify the measurements to be included in the flowset.

Use the template when defining the IPFIX action.

Using the GUI to Define an IPFIX Service Action

Select IPFIX from the Action selection list on the Create Managed Service > Action page.

Figure 12. Selecting IPFIX Action in Create Managed Service
Enter the following required configuration details:
  • Assign a delivery interface.
  • Configure the collector IP address.
  • Identify the IPFIX template.
The following configuration is optional:
  • Inactive timeout: the interval of inactivity after which a flow is marked inactive and its flowset is exported.
  • Active timeout: the interval between successive flowset exports for a long-lived flow.
  • Source IP: source address to use for the IPFIX flowsets.
  • UDP port: UDP port to use for sending IPFIX flowsets.
  • MTU: MTU to use for sending IPFIX flowsets.

After completing the configuration, click Next, and then click Save.

Using the CLI to Define an IPFIX Service Action

Define a managed service and define the IPFIX action.
controller(config)# managed-service MS-IPFIX-SERVICE
controller(config-managed-srv)# 1 ipfix TO-DELIVERY-INTERFACE
controller(config-managed-srv-ipfix)# collector 10.106.1.60
controller(config-managed-srv-ipfix)# template IPFIX-TEMPLATE

The active-timeout and inactive-timeout commands are optional.
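For example, the optional timers could be set in the same submode as follows (the values shown are illustrative; the active timer is in minutes and the inactive timer is in seconds):
controller(config-managed-srv-ipfix)# active-timeout 5
controller(config-managed-srv-ipfix)# inactive-timeout 30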

To view the running-config for a managed service using the IPFIX action, enter the following command:
controller1# show running-config managed-service MS-IPFIX-ACTIVE
! managed-service
managed-service MS-IPFIX-ACTIVE
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 ipfix TO-DELIVERY-INTERFACE
collector 10.106.1.60
template IPFIX-TEMPLATE
To view the IPFIX templates, enter the following command:
config# show running-config ipfix-template
! ipfix-template
ipfix-template IPFIX-IP
template-id 1974
key destination-ipv4-address
key destination-ipv6-address
key ethernet-type
key source-ipv4-address
key source-ipv6-address
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field minimum-ttl
field tcp-control-bits
------------------------output truncated------------------------

Records Per Interface NetFlow Using DST-MAC Rewrite

Destination MAC rewrite for the records-per-interface NetFlow and IPFIX feature is the default setting. It applies to switches running Extensible Operating System (EOS) and SWL and is supported on all platforms.

A configuration option exists for using the src-mac when overwriting the dst-mac is not preferred.

Configurations using the CLI

Global Configuration

The global configuration is a central place to choose which rewrite option to use for the records-per-interface feature. The following example illustrates using rewrite-src-mac or rewrite-dst-mac with the filter-mac-rewrite command.
c1(config)# filter-mac-rewrite rewrite-src-mac
c1(config)# filter-mac-rewrite rewrite-dst-mac

Netflow Configuration

The following example illustrates a NetFlow configuration.
c1(config)# managed-service ms1
c1(config-managed-srv)# 1 netflow
c1(config-managed-srv-netflow)# collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface

IPFIX Configuration

The following example illustrates an IPFIX configuration.
c1(config)# ipfix-template i1
c1(config-ipfix-template)# field maximum-ttl 
c1(config-ipfix-template)# key records-per-dmf-interface
c1(config-ipfix-template)# template-id 300

c1(config)# managed-service ms1
c1(config-managed-srv)# 1 ipfix
c1(config-managed-srv-ipfix)# template i1

Show Commands

NetFlow Show Commands

Use the show running-config managed-service command to view the NetFlow settings.
c1(config)# show running-config managed-service 
! managed-service
managed-service ms1
!
1 netflow
collector 213.1.1.20 udp-port 2055 mtu 1024 records-per-interface

IPFIX Show Commands

Use the show ipfix-template i1 command to view the IPFIX settings.
c1(config)# show ipfix-template i1
~~~~~~~~~~~~~~~~~~ Ipfix-templates ~~~~~~~~~~~~~~~~~~
# Template Name  Keys                       Fields
-|--------------|--------------------------|-----------|
1 i1             records-per-dmf-interface  maximum-ttl

c1(config)# show running-config managed-service 
! managed-service
managed-service ms1
!
1 ipfix
template i1

Limitations

  • The filter-mac-rewrite rewrite-src-mac command cannot be used on the filter interface that is part of the policy using timestamping replace-src-mac. However, the command works when using a timestamping add-header-after-l2 configuration.

Packet-masking Action

The packet-masking action can hide specific characters in a packet, such as a password or credit card number, based on offsets from different anchors or by matching characters using regular expressions (regex).

The mask service action applies the specified mask to the matched packet region.

GUI Configuration

Figure 13. Create Managed Service: Packet Masking

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PACKET-MASK
! managed-service
managed-service MS-PACKET-MASK
description "This service masks pattern matching an email address in payload with X"
1 mask ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1

Arista Analytics Node Capability

Arista Analytics Node capabilities are enhanced to handle NetFlow v5/v9 and IPFIX packets. All of this flow data is represented under the NetFlow index.

Note: NetFlow flow record generation is enhanced for selecting VXLAN traffic. For VXLAN traffic, flow processing is based on inner headers, with the VNI as part of the key for flow lookup because IP addresses can overlap between VNIs.
Figure 14. NetFlow Managed Service

NetFlow records are exported using User Datagram Protocol (UDP) to one or more specified NetFlow collectors. Use the DMF Service Node to configure the NetFlow collector IP address and the destination UDP port. The default UDP port is 2055.

Note: No other service action, except the UDP replication service, can be applied after a NetFlow service action because part of the NetFlow action is to drop the packets.

Configuring the Arista Analytics Node Using the GUI

From the Arista Analytics Node dashboard, apply filter rules to display specific flow information.

The following are the options available on this page:
  • Delivery interface: interface to use for delivering NetFlow records to collectors.
    Note: The next-hop address must be resolved for the service to be active.
  • Collector IP: identify the NetFlow collector IP address.
  • Inactive timeout: use the inactive-timeout command to configure the interval of inactivity before NetFlow times out. The default is 15 seconds.
  • Source IP: specify a source IP address to use as the source of the NetFlow packets.
  • Active timeout: use active timeout to configure the period during which a flow can be continuously exported before it is automatically terminated. The default is one minute.
  • UDP port: change the UDP port number used for the NetFlow packets. The default is 2055.
  • Flows: specify the maximum number of NetFlow flows allowed. The allowed range is 32768 to 1048576. The default is 262144.
  • Per-interface records: identify the filter interface where the NetFlow packets were originally received. This information can be used to identify the hop-by-hop path from the filter interface to the NetFlow collector.
  • MTU: change the Maximum Transmission Unit (MTU) used for NetFlow packets.
Figure 15. Create Managed Service: NetFlow Action

Configuring the Arista Analytics Node Using the CLI

Use the show managed-services command to display the ARP resolution status.
Note: The DANZ Monitoring Fabric (DMF) Controller resolves ARP messages for each NetFlow collector IP address on the delivery interface that matches the defined subnet. The subnets defined on the delivery interfaces cannot overlap and must be unique for each delivery interface.

Enter the 1 netflow command with a configuration name; the submode changes to config-managed-srv-netflow for viewing and configuring a specific NetFlow configuration.

The DMF Service Node replicates NetFlow packets received without changing the source IP address. Packets that do not match the specified destination IP address and packets that are not IPv4 or UDP are passed through. To configure a NetFlow-managed service, perform the following steps:

  1. Configure the IP address on the delivery interface.
    This IP address is the next-hop IP address from the DANZ Monitoring Fabric towards the NetFlow collector.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
  2. Configure the rate-limit for the NetFlow delivery interface.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
    CONTROLLER-1(config-switch-if)# rate-limit 256000
    Note: The rate limit must be configured when enabling Netflow. When upgrading from a version of DMF before release 6.3.1, the Netflow configuration is not applied until a rate limit is applied to the delivery interface.
  3. Configure the NetFlow managed service using the 1 netflow command followed by an identifier for the specific NetFlow configuration.
    
    CONTROLLER-1(config)# managed-service MS-NETFLOW-SERVICE
    CONTROLLER-1(config-managed-srv)# 1 netflow NETFLOW-DELIVERY-PORT
    CONTROLLER-1(config-managed-srv-netflow)#
    The following commands are available in this submode:
    • active-timeout: configure the maximum length of time the NetFlow is transmitted before it is ended (in minutes).
    • collector: configure the collector IP address, and change the UDP port number or the MTU.
    • inactive-timeout: configure the length of time that the NetFlow is inactive before it is ended (in seconds).
    • max-flows: configure the maximum number of flows managed.

    An option exists to limit the number of flows or change the timeouts using the max-flows, active-timeout, or inactive-timeout commands.
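    For example, the flow limit and timers could be adjusted as follows (the values shown are illustrative; max-flows must fall within the allowed range):
    CONTROLLER-1(config-managed-srv-netflow)# max-flows 524288
    CONTROLLER-1(config-managed-srv-netflow)# inactive-timeout 30
    CONTROLLER-1(config-managed-srv-netflow)# active-timeout 2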

  4. Configure the NetFlow collector IP address using the following command:
    collector <ipv4-address> [udp-port <integer>] [mtu <integer>] [records-per-interface]
    

    The IP address, in IPV4 dotted-decimal notation, is required. The MTU and UDP port are required when changing these parameters from the defaults. Enable the records-per-interface option to allow identification of the filter interfaces from which the Netflow originated. Configure the Arista Analytics Node to display this information, as described in the DMF User Guide.

    The following example illustrates changing the NetFlow UDP port to 9991.
    collector 10.181.19.31 udp-port 9991
    Note: The IP address must be in the same subnet as the configured next hop and unique. It cannot be the same as the Controller, service node, or any monitoring fabric switch IP address.
  5. Configure the DMF policy with the forward action and add the managed service to the policy.
    Note: A DMF policy does not require any configuration related to a delivery interface for NetFlow policies because the DMF Controller automatically assigns the delivery interface.
    The example below shows the configuration required to implement two NetFlow service instances (MS-NETFLOW-1 and MS-NETFLOW-2).
    ! switch
    switch DMF-DELIVERY-SWITCH-1
    !
    interface ethernet1
    role delivery interface-name NETFLOW-DELIVERY-PORT-1 ip-address 10.3.1.1
    nexthop-ip 10.3.1.2 255.255.255.0
    interface ethernet2
    role delivery interface-name NETFLOW-DELIVERY-PORT-2 ip-address 10.3.2.1
    nexthop-ip 10.3.2.2 255.255.255.0
    ! managed-service
    managed-service MS-NETFLOW-1
    service-interface switch DMF-CORE-SWITCH-1 interface ethernet11/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-1
    collector-ip 10.106.1.60 udp-port 2055 mtu 1024
    managed-service MS-NETFLOW-2
    service-interface switch DMF-CORE-SWITCH-2 interface ethernet12/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-2
    collector-ip 10.106.2.60 udp-port 2055 mtu 1024
    ! policy
    policy GENERATE-NETFLOW-1
    action forward
    filter-interface TAP-INTF-DC1-1
    filter-interface TAP-INTF-DC1-2
    use-managed-service MS-NETFLOW-1 sequence 1
    1 match any
    policy GENERATE-NETFLOW-2
    action forward
    filter-interface TAP-INTF-DC2-1
    filter-interface TAP-INTF-DC2-2
    use-managed-service MS-NETFLOW-2 sequence 1
    1 match any

Pattern-drop Action

The pattern-drop service action drops matching traffic.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.

GUI Configuration

Figure 16. Create Managed Service: Pattern Drop Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PATTERN-DROP
! managed-service
managed-service MS-PATTERN-DROP
description "This service drops traffic that has an email address in its payload"
1 pattern-drop ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1

Pattern-match Action

The pattern-match service action forwards matching traffic and is otherwise similar to the pattern-drop service action.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.

GUI Configuration

Figure 17. Create Managed Service: Pattern Match Action

CLI Configuration

Use the pattern-match pattern keyword to enable the pattern-matching service action and specify the pattern that packets must match to be forwarded.

The following example matches traffic with the string Windows NT 5.(0-1) anywhere in the packet and delivers the packets to the delivery interface TOOL-PORT-TO-WIRESHARK-1. This service is optional and is applied to TCP traffic to destination port 80.
! managed-service
managed-service MS-PATTERN-MATCH
description 'regular expression filtering'
1 pattern-match 'Windows\\sNT\\s5\\.[0-1]'
service-interface switch CORE-SWITCH-1 ethernet13/1
! policy
policy PATTERN-MATCH
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'match regular expression pattern'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-PATTERN-MATCH sequence 1 optional
1 match tcp dst-port 80

Slice Action

The slice service action slices the given number of packets based on the specified starting point in the packet. Packet slicing reduces packet size to increase processing and monitoring throughput. Passive monitoring tools process fewer bits while maintaining each packet's vital, relevant portions. Packet slicing can significantly increase the capacity of forensic recording tools. Apply packet slicing by specifying the number of bytes to forward based on an offset from the following locations in the packet:
  • Packet start
  • L3 header start
  • L4 header start
  • L4 payload start

GUI Configuration

Figure 18. Create Managed Service: Slice Action

This page allows inserting an additional header containing the original packet length.

CLI Configuration

Use the slice keyword to enable the packet slicing service action and to insert an additional header containing the original packet length, as shown in the following example:
! managed-service
managed-service my-service-name
1 slice l3-header-start 20 insert-original-packet-length
service-interface switch DMF-CORE-SWITCH-1 ethernet20/1
The following example truncates the packet from the first byte of the Layer-4 payload, preserving just the original Ethernet header. The service is optional and is applied to all TCP traffic from port 80 with the destination IP address 10.2.19.119
! managed-service
managed-service MS-SLICE-1
description 'slicing service'
1 slice l4-payload-start 1
service-interface switch DMF-CORE-SWITCH-1 ethernet40/1
! policy
policy slicing-policy
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'remove payload'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-SLICE-1 sequence 1 optional
1 match tcp dst-ip 10.2.19.119 255.255.255.255 src-port 80

Packet Slicing on the 7280 Switch

This feature removes unwanted or unneeded bytes from a packet at a configurable byte position (offset). This approach is beneficial when the data of interest is situated within the headers or early in the packet payload. This action reduces the volume of the monitoring stream, particularly in cases where payload data is not necessary.

Another use case for packet slicing (slice action) is removing payload data to meet compliance requirements for captured traffic.

Within the DANZ Monitoring Fabric (DMF), two types of slice-managed services (packet slicing services) exist, distinguished by whether the service is installed on a service node or on an interface of a supported switch. This section covers only the slice-managed service configured on a switch. The managed service interface is the switch interface used to configure this service.

All 7280 switches compatible with DMF 8.4 and above support this feature. Use the show switch all property command to check which switches in the DMF fabric support this feature. The feature is supported if the Min Truncate Offset and Max Truncate Offset properties have non-zero values.

# show switch all property
# Switch Min Truncate Offset ... Max Truncate Offset
-|------|-------------------| ... |-------------------|
1 7280   100                 ...  9236
2 core1                      ...
Note: The CLI output example above is truncated for illustrative purposes. The actual output will differ.

Using the CLI to Configure Packet Slicing - 7280 Switch

Configure a slice-managed service on a switch using the following steps.
  1. Create a managed service using the managed-service service name command.
  2. Add the slice action with the packet-start anchor and an offset value within the supported range reported by the show switch all property command.
  3. Configure the service interface under the config-managed-srv submode using the service-interface switch switch-name interface-name command as shown in the following example.
    > enable
    # config
    (config)# managed-service slice-action-7280-J2-J2C
    (config-managed-srv)# 1 slice packet-start 101
    (config-managed-srv)# service-interface switch 7280-J2-J2C Ethernet10/1
This feature requires the service interface to be in MAC loopback mode.
  4. To set the service interface in MAC loopback mode, navigate to the config-switch-if submode and configure it using the loopback-mode mac command, as shown in the following example.
    (config)# switch 7280-J2-J2C
    (config-switch)# interface Ethernet10/1
    (config-switch-if)# loopback-mode mac
Once a managed service for slice action exists, any policy can use it.
  5. Enter the config-policy submode and chain the managed service using the use-managed-service service name sequence sequence command.
    (config)# policy timestamping-policy
    (config-policy)# use-managed-service slice-action-7280-J2-J2C sequence 1

Key points to consider while configuring the slice action on a supported switch:

  1. Only the packet-start anchor is supported.
  2. Ensure the offset is within the Min/Max truncate offset bounds reported by the show switch all property command. If the configured value is outside these bounds, DMF uses the nearest bound.

    For example, if a user configures the offset as 64, and the min truncate offset reported by switch properties is 100, then the offset used is 100. If the configured offset is 10,000 and the max truncate offset reported by the switch properties is 9236, then the offset used is 9236.

  3. A configured offset for slice-managed service includes FCS when programmed on a switch interface, which means an offset of 100 will result in a packet size of 96 bytes (accounting for 4-byte FCS).
  4. Configuring an offset below 17 is not allowed.
  5. The same service interface cannot chain multiple managed services.
  6. The insert-original-packet-length option is not applicable for switch-based slice-managed service.

CLI Show Commands

Use the show policy policy name command to see the runtime state of a policy using the slice-managed service. The command shows the service interface information and stats.

Controller# show policy packet-slicing-policy
Policy Name: packet-slicing-policy
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: packet-slicing-7280
Installed Time : 2023-08-09 19:00:40 UTC
Installed Duration : 1 hour, 17 minutes
~ Match Rules ~
# Rule
-|-----------|
1 1 match any

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     7280   Ethernet2/1  up    rx  0       0     0        -        2023-08-09 19:00:40.305000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     7280   Ethernet3/1  up    tx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name        Role Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|-------------------|----|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 packet-slicing-7280 pre  7280   Ethernet10/1 up    tx  0       0     0        -        2023-08-09 19:00:40.305000 UTC
2 packet-slicing-7280 post 7280   Ethernet10/1 up    rx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~ Core Interface(s) ~
None.

~ Failed Path(s) ~
None.

Use the show managed-services command to view the status of all the managed services, including the packet-slicing managed service on a switch.

Controller# show managed-services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-services ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-------------------|------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 packet-slicing-7280 7280   Ethernet10/1     True      400Gbps             400Gbps            80bps                 80bps

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Actions of Service Names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Sequence Service Action Slice Anchor Insert original packet length Slice Offset
-|-------------------|--------|--------------|------------|-----------------------------|------------|
1 packet-slicing-7280 1        slice          packet-start False                         101

Using the GUI to Configure Packet Slicing - 7280 Switch

Perform the following steps to configure or edit a managed service.

Managed Service Configuration

  1. To configure or edit a managed service, navigate to the DMF Managed Services page from the Monitoring menu and click Managed Services.
    Figure 19. DANZ Monitoring Fabric (DMF) Managed Services
    Figure 20. DMF Managed Services Add Managed Service
  2. Configure a managed service interface on a switch that supports packet slicing. Make sure to deselect the Show Managed Device Switches Only checkbox.
    Figure 21. Create Managed Service
  3. Configure a new managed service action using Add Managed service action. The action chain supports only one action when configuring packet slicing on a switch.
    Figure 22. Add Managed service action
  4. Use Action > Slice with Anchor > Packet Start to configure the packet slicing managed service on a switch.
    Figure 23. Configure Managed Service Action
  5. Click Append to continue. The slice action appears on the Managed Services page.
    Figure 24. Slice Action Added
Interface Loopback Configuration

The managed service interface used for slice action must be in MAC loopback mode.

  1. Configure the loopback mode in the Fabric > Interfaces page by clicking on the configuration icon of the interface.
    Figure 25. Interfaces
    Note: The image above has been edited for documentation purposes. The actual output will differ.
  2. Enable the toggle for MAC Loopback Mode (set the toggle to Yes).
    Figure 26. Edit Interface
  3. After making all configuration changes, click Save.
Policy Configuration
  1. Create a new policy from the DMF Policies page.
    Figure 27. DMF Policies Page
  2. Add the previously configured packet slicing managed service.
    Figure 28. Create Policy
  3. Select Add Service under the + Add Service(s) option shown above.
    Figure 29. Add Service
    Figure 30. Service Type - Service - slice action
  4. Click Add 1 Service and the slice-managed service (packet-slicing-policy) appears in the Create Policy page.
    Figure 31. Manage Service Added
  5. Click Create Policy and the new policy appears in DMF Policies.
    Figure 32. DMF Policy Configured
    Note: The images above have been edited for documentation purposes. The actual outputs may differ.

Troubleshooting Packet Slicing

The show switch all property command provides upper and lower bounds of packet slicing action’s offset. If bounds are present, the feature is supported; otherwise, the switch does not support the packet slicing feature.

The show fabric errors managed-service-error command provides information when DANZ Monitoring Fabric (DMF) fails to install a configured packet slicing managed service on a switch.

The following are some of the failure cases:
  1. The managed service interface is down.
  2. More than one action is configured on a managed service interface of the switch.
  3. The managed service interface on a switch is neither a physical interface nor a LAG port.
  4. A non-slice managed service is configured on a managed service interface of a switch.
  5. The switch does not support packet slicing managed service, and its interface is configured with slice action.
  6. Slice action configured on a switch interface is not using a packet-start anchor.
  7. The managed service interface is not in MAC loopback mode.

Use the following commands to troubleshoot packet-slicing issues.

Controller# show fabric errors managed-service-error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed Service related error~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Error Service Name
-|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
1 Pre-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down  packet-slicing-7280
2 Post-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down packet-slicing-7280

The show switch switch name interface interface name dmf-stats command provides Rx and Tx rate information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 dmf-stats
# Switch DPID Name         State Rx Rate Pkt Rate Peak Rate Peak Pkt Rate TX Rate Pkt Rate Peak Rate Peak Pkt Rate Pkt Drop Rate
-|-----------|------------|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|-------------|
1 7280        Ethernet10/1 down  -       0        128bps    0             -       0        128bps    0             0

The show switch switch name interface interface name stats command provides Rx and Tx counter information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 stats
# Name Rx Pkts Rx Bytes Rx Drop Tx Pkts Tx Bytes Tx Drop
-|------------|-------|--------|-------|-------|--------|-------|
1 Ethernet10/1 22843477 0 5140845937 0

Considerations

  1. Managed service action chaining is not supported when using a switch interface as a managed service interface.
  2. When configured for a supported switch, the managed service interface for slice action can only be a physical interface or a LAG.
  3. When using packet slicing managed service, packets ingressing on the managed service interface are not counted in the ingress interface counters, affecting the output of the show switch switch name interface interface name stats and show switch switch name interface interface name dmf-stats commands. This issue does not impact byte counters; all byte counters will show the original packet size, not the truncated size.

VXLAN Stripping on the 7280R3 Switch

Virtual Extensible LAN Header Stripping

Virtual Extensible LAN (VXLAN) Header Stripping supports the delivery of decapsulated packets to tools and devices in a DANZ Monitoring Fabric (DMF) fabric. This feature removes the VXLAN header, whether the header was added by a tunnel used to reach the TAP aggregation switch or was already present in the tapped traffic. Within the fabric, DMF supports installing the strip VXLAN service on a filter interface or a filter-and-delivery interface of a supported switch.

Platform Compatibility

For DMF deployments, the target platform is DCS-7280R3.

Use the show switch all property command to verify which switch in the DMF fabric supports this feature.

The feature is supported if the Strip Header Supported property has the value BSN_STRIP_HEADER_CAPS_VXLAN.
Note: The following example is displayed differently for documentation purposes than what appears when using the CLI.
# show switch all property
#: 1
Switch : lyd599
Max Phys Port: 1000000
Min Lag Port : 1000001
Max Lag Port : 1000256
Min Tunnel Port: 15000001
Max Tunnel Port: 15001024
Max Lag Comps: 64
Tunnel Supported : BSN_TUNNEL_L2GRE
UDF Supported: BSN_UDF_6X2_BYTES
Enhanced Hash Supported: BSN_ENHANCED_HASH_L2GRE,BSN_ENHANCED_HASH_L3,BSN_ENHANCED_HASH_L2,
 BSN_ENHANCED_HASH_MPLS,BSN_ENHANCED_HASH_SYMMETRIC
Strip Header Supported : BSN_STRIP_HEADER_CAPS_VXLAN
Min Rate Limit : 1Mbps
Max Multicast Replication Groups : 0
Max Multicast Replication Entries: 0
PTP Timestamp Supported Capabilities : ptp-timestamp-cap-replace-smac, ptp-timestamp-cap-header-64bit,
 ptp-timestamp-cap-header-48bit, ptp-timestamp-cap-flow-based,
 ptp-timestamp-cap-add-header-after-l2
Min Truncate Offset: 100
Max Truncate Offset: 9236

Using the CLI to Configure VXLAN Header Stripping

Configuration

Use the following steps to configure strip-vxlan on a switch:

  1. Optionally, set the strip-vxlan-udp-port field in the switch configuration; the default UDP port for strip-vxlan is 4789.
  2. Enable or disable strip-vxlan on a filter or both-filter-and-delivery interface using the role both-filter-and-delivery interface-name filter-interface strip-vxlan command.
    > enable
    # config
    (config)# switch switch-name
    (config-switch)# strip-vxlan-udp-port udp-port-number
    (config-switch)# interface interface-name
    (config-switch-if)# role both-filter-and-delivery interface-name filter-interface strip-vxlan
    (config-switch-if)# role both-filter-and-delivery interface-name filter-interface no-strip-vxlan
    (config)# show running-config
  3. After enabling a filter interface with strip-vxlan, any policy can use it. From the config-policy submode, add the filter-interface to the policy:
    (config)# policy p1
    (config-policy)# filter-interface filter-interface

Show Commands

Use the show policy policy name command to see the runtime state of a policy using a filter interface with strip-vxlan configured. It will also show the service interface information and stats.
# show policy strip-vxlan 
Policy Name: strip-vxlan
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 0
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 0
# of pre service interfaces: 0
# of post service interfaces : 0
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Installed Time : 2024-05-02 19:54:27 UTC
Installed Duration : 1 minute, 18 secs
Timestamping enabled : False
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     lyd598 Ethernet1/1  up    rx  0       0     0        -        2024-05-02 19:54:27.141000 UTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     lyd598 Ethernet2/1  up    tx  0       0     0        -        2024-05-02 19:54:27.141000 UTC
~ Service Interface(s) ~
None.
~ Core Interface(s) ~
None.
~ Failed Path(s) ~
None.

Using the GUI to Configure VXLAN Header Stripping

Filter Interface Configuration

To configure or edit a filter interface, proceed to Interfaces from the Monitoring menu and select Interfaces > Filter Interfaces.

Figure 33. Filter Interfaces
Figure 34. DMF Interfaces

Configure a filter interface on a switch that supports strip-vxlan.

Figure 35. Configure Filter Interface

Enable or Disable Strip VXLAN.

Figure 36. Enable Strip VXLAN
Figure 37. DMF Interfaces Updated

Policy Configuration

Create a new policy using DMF Policies and add the filter interface with strip VXLAN enabled.

Figure 38. Create Policy
Figure 39. Strip VXLAN Header

Select Add port(s) under the Traffic Sources option and add the Filter Interface.

Figure 40. Selected Traffic Sources

Add another delivery interface and create the policy.

Figure 41. Policy Created

Syslog Messages

There are no syslog messages associated with this feature.

Troubleshooting

The show switch all property command provides the Strip Header Supported property of the switch. If the value BSN_STRIP_HEADER_CAPS_VXLAN is present, the feature is supported; otherwise, the switch does not support this feature.

The show fabric warnings feature-unsupported-on-device command provides information when DMF fails to enable strip-vxlan on an unsupported switch.

The show switch switch-name table strip-vxlan-header command provides the gentable details.

The following are examples of several failure cases:

  1. The filter interface is down.
  2. The interface with strip-vxlan is neither a filter interface nor a filter-and-delivery interface.
  3. The switch does not support strip-vxlan.
  4. Tunneling / UDF is enabled simultaneously with strip-vxlan.
  5. Unsupported pipeline mode with strip-vxlan enabled (strip-vxlan requires a specific pipeline mode strip-vxlan-match-push-vlan).

Limitations

  • When configured for a supported switch, the filter interface for decap-vxlan action can only be a physical interface or a LAG.
  • It is not possible to enable strip-vxlan simultaneously with tunneling / UDF.
  • When enabling strip-vxlan on one or more switch interfaces on the same switch, other filter interfaces on the same switch cannot be matched on the VXLAN header.

Session Slicing for TCP and UDP Sessions

Session-slice keeps track of TCP and UDP sessions (distinguished by source and destination IP address and port) and counts the number of packets sent in each direction (client-to-server and vice versa). After recognizing the session, the action transmits a user-configured number of packets to the tool node.

For TCP packets, session-slice tracks the number of packets sent in each direction after establishing the TCP handshake. Slicing begins after the packet count in a direction has reached the configured threshold in both directions.

For UDP packets, slicing begins after reaching the configured threshold in either direction.

By default, session-slice will operate on both TCP and UDP sessions but is configurable to operate on only one or the other.

Note: The count of packets in one direction may exceed the user-configured threshold because fewer packets have arrived in the other direction. Counts in both directions must be greater than or equal to the threshold before dropping packets.

Refer to the DANZ Monitoring Fabric (DMF) Verified Scale Guide for session-slicing performance numbers.

Configure session-slice in managed services through the Controller as a Service Node action.

Using the CLI to Configure Session Slicing

Configure session-slice in managed services through the Controller as a Service Node action.

Configuration Steps

  1. Create a managed service and enter the service interface.
  2. Choose the session-slice service action with the command: <seq num> session-slice
    Note: The <seq num> session-slice command opens the session-slice submode, which supports two configuration parameters: slice-after and idle-timeout.
  3. Use slice-after to configure the packet threshold, after which the Service Node will stop forwarding packets to tool nodes.
  4. Use idle-timeout to configure the timeout in milliseconds before an idle connection is removed from the cache. idle-timeout is an optional command with a default value of 60000 ms.
    dmf-controller-1(config)# managed-service managed_service_1
    dmf-controller-1(config-managed-srv)# 1 session-slice
    dmf-controller-1(config-managed-srv-ssn-slice)# slice-after 1000
    dmf-controller-1(config-managed-srv-ssn-slice)# idle-timeout 60000
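Once configured, the session-slice service is chained into a policy like any other managed service. The following sketch assumes a policy named session-slice-policy (the policy name is illustrative):
    dmf-controller-1(config)# policy session-slice-policy
    dmf-controller-1(config-policy)# use-managed-service managed_service_1 sequence 1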
Show Commands

The following show commands provide helpful information.

The show running-config managed-service managed service command helps verify whether the session-slice configuration is complete.
dmf-controller-1(config)# show running-config managed-service managed_service_1 

! managed-service
managed-service managed_service_1
!
1 session-slice
slice-after 1000
idle-timeout 60000
The show managed-services managed service command provides status information about the service.
dmf-controller-1(config)# show managed-services managed_service_1
# Service Name      Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-----------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 managed_service_1 DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps

Using the GUI to Configure Session Slicing

Perform the following steps to configure session slicing.
  1. Navigate to Monitoring > Managed Services > Managed Services.
    Figure 42. Managed Services
  2. Select the + icon to create a new managed service.
    Figure 43. Create Managed Service
  3. Enter a Name for the managed service.
    Figure 44. Managed Service Name
  4. Select a Switch from the drop-down list.
    Figure 45. Manage Service Switch
    Figure 46. Managed Service Switch Added
  5. Select an Interface from the drop-down list.
    Figure 47. Managed Service Interface Added
  6. Select Actions or Next.
     
    Figure 48. Actions Menu
  7. Click the + icon to select a managed service action.
    Figure 49. Configure Managed Service Action List
  8. Choose Session Slice from the drop-down list. Adjust the Slice After and Idle Timeout parameters, as required.
     
    Figure 50. Configure Managed Service Action Session Slice
  9. Select Append and then Save to add the session slice managed service.
    Figure 51. Managed Service Session Slice

Timestamp Action

The timestamp service action timestamps every matching packet with the time at which the service node received it.

GUI Configuration

Figure 52. Create Managed Service: Timestamp Action

CLI Configuration

! managed-service
managed-service MS-TIMESTAMP-1
1 timestamp
service-interface switch CORE-SWITCH-1 ethernet15/3
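As with the other service actions, a policy chains the timestamp service using the use-managed-service command. The following sketch reuses the interface names from earlier examples; the policy name is illustrative:
! policy
policy TIMESTAMP-POLICY
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
filter-interface TAP-INTF-FROM-PRODUCTION
use-managed-service MS-TIMESTAMP-1 sequence 1
1 match any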

UDP-replication Action

The UDP-replication service action copies UDP messages, such as Syslog or NetFlow messages, and sends the copied packets to a new destination IP address.

Configure a rate limit when enabling UDP replication. When upgrading from a version of DANZ Monitoring Fabric (DMF) before release 6.3.1, the UDP-replication configuration is not applied until a rate limit is applied to the delivery interface.

The following example illustrates applying a rate limit to a delivery interface used for UDP replication:
CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
CONTROLLER-1(config-switch)# interface ethernet1
CONTROLLER-1(config-switch-if)# role delivery interface-name udp-delivery-1
CONTROLLER-1(config-switch-if)# rate-limit 256000
Note: No other service action can be applied after a UDP-replication service action.

GUI Configuration

Use the UDP-replication service to copy UDP traffic, such as Syslog messages or NetFlow packets, and send the copied packets to a new destination IP address. This function sends traffic to more destination syslog servers or NetFlow collectors than would otherwise be allowed.

Enable the checkbox for the destination for the copied output, or click the provision control (+) and add the IP address in the dialog that appears.
Figure 53. Configure Output Packet Destination IP

For the header-strip service action only, configure the policy rules for matching traffic after applying the header-strip service action. After completing pages 1-4, click Append and enable the checkbox to apply the policy.

Click Save to save the managed service.

CLI Configuration

Enter the 1 udp-replicate command followed by a configuration name to view or configure a specific UDP-replication configuration; the submode changes to the config-managed-srv-udp-replicate submode.
controller-1(config)# managed-service MS-UDP-REPLICATE-1
controller-1(config-managed-srv)# 1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
controller-1(config-managed-srv-udp-replicate)#
From this submode, define the destination address of the packets to copy and the destination address for sending the copied packets.
controller-1(config-managed-srv-udp-replicate)# in-dst-ip 10.1.1.1
controller-1(config-managed-srv-udp-replicate)# out-dst-ip 10.1.2.1
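Taken together, a complete UDP-replication managed service resembles the following sketch; the switch and service-interface names are illustrative, and the destination addresses are the ones defined above:
! managed-service
managed-service MS-UDP-REPLICATE-1
1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
in-dst-ip 10.1.1.1
out-dst-ip 10.1.2.1
service-interface switch CORE-SWITCH-1 ethernet15/5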

Redundancy of Managed Services in Same DMF Policy

A second managed service can act as a backup service in the same DANZ Monitoring Fabric (DMF) policy. The backup service is activated only when the primary service becomes unavailable. The backup service can be on the same service node or core switch, or on a different service node and core switch.
Note: Transitioning from the active to the backup managed service requires reprogramming switches and associated managed appliances. Although this reprogramming happens automatically, it can cause a brief traffic loss.

Using the GUI to Configure a Backup Managed Service

To assign a managed service as a backup service in a DANZ Monitoring Fabric (DMF) policy, perform the following steps:
  1. Select Monitoring > Policies and click the Provision control (+) to create a new policy.
  2. Configure the policy as required. From the Services section, click the Provision control (+) in the Managed Services table.
    Figure 54. Policy with Backup Managed Service
  3. Select the primary managed service from the Managed Service selection list.
  4. Select the backup service from the Backup Service selection list and click Append.

Using the CLI to Configure a Backup Managed Service

To implement backup-managed services, perform the following steps:
  1. Identify the first managed service.
    managed-service MS-SLICE-1
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag1
  2. Identify the second managed service.
    managed-service MS-SLICE-2
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag2
  3. Configure the policy referring to the backup managed service.
    policy SLICE-PACKETS
    action forward
    delivery-interface TOOL-PORT-1
    filter-interface TAP-PORT-1
    use-managed-service MS-SLICE-1 sequence 1 backup-managed-service MS-SLICE-2
    1 match ip

Application Identification

The DANZ Monitoring Fabric (DMF) Application Identification feature uses Deep Packet Inspection (DPI) to identify applications in packet flows received via filter interfaces and generates IPFIX flow records. These IPFIX flow records are transmitted to a configured collector device via the L3 delivery interface. The feature also provides a filtering function, forwarding or dropping packets from specific applications before sending traffic to the analysis tools.
Note: Application identification is supported on R640 Service Nodes (DCA-DM-SC and DCA-DM-SC2) and R740 Service Nodes (DCA-DM-SDL and DCA-DM-SEL).

Using the CLI to Configure app-id

Perform the following steps to configure app-id.
  1. Create a managed service and enter the service interface.
  2. Choose the app-id managed service using the <seq num> app-id command.
    Note: The above command enters the app-id submode, which supports two configuration parameters: collector and l3-delivery-interface. Both are required.
  3. To configure the IPFIX collector IP address, enter the following command: collector ip-address.
    The UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively.
  4. Enter the command: l3-delivery-interface delivery interface name to configure the delivery interface.
  5. Add this managed service to a policy. The policy will not have a physical delivery interface.
The following shows an example of an app-id configuration that sends IPFIX application records to the Collector (analytics node) at IP address 192.168.1.1 over the configured delivery interface named app-to-analytics:
managed-service ms
service-interface switch core1 ethernet2
!
1 app-id
collector 192.168.1.1
l3-delivery-interface app-to-analytics

After configuring the app-id, refer to the analytics node for application reports and visualizations. For instance, a flow is classified internally with the following tuple: ip, tcp, http, google, and google_maps. Consequently, the analytics node displays the most specific app ID for this flow as google_maps under appName.

On the Analytics Node, AppIDs 0-4 represent the applications identified in a flow by their numerical IDs, where 0 is the most specific application identified in that flow and 4 is the least specific. In the example above, ID 0 would be the numerical ID for google_maps, ID 1 for google, ID 2 for http, ID 3 for tcp, and ID 4 for ip. Use appName instead of these fields, since the numerical IDs require an ID-to-name mapping to interpret.
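The ID-to-name relationship above can be pictured with a small illustrative sketch (the classification stack comes from the example in the text; the numerical IDs shown here are positional placeholders, not real signature-bundle values):

```python
# Classification stack for the example flow, most specific first
# (AppID 0) to least specific (AppID 4).
flow_stack = ["google_maps", "google", "http", "tcp", "ip"]

# AppID position -> application name, as the Analytics Node presents it.
app_ids = {position: name for position, name in enumerate(flow_stack)}

# appName reports the most specific classification directly.
app_name = flow_stack[0]
```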

Using the CLI to Configure app-id-filter

Perform the following steps to configure app-id-filter:
  1. Create a managed service and enter the service interface.
  2. Choose the app-id-filter managed service using the <seq num> app-id-filter command.
    Note: The above command should enter the app-id-filter submode, which supports three configuration parameters: app, app-category, and filter-mode. The app parameter is required, while app-category and filter-mode are optional. The filter-mode option defaults to forward.
  3. Enter the command: app application name to configure the application name.
    Tip: Press the Tab key after entering the app keyword to see all possible application names. Type a partial name and press Tab to see all matching choices and auto-complete the name. The application name provided must match a name in this list. A service node must be connected to the Controller for this list to appear. Any number of apps can be entered, one at a time, using the app application-name command. An example of a (partial) list of names:
    dmf-controller-1(config-managed-srv-app-id-filter)# app ibm
    ibm      ibm_as_central   ibm_as_dtaq  ibm_as_netprt  ibm_as_srvmap  ibm_iseries  ibm_tsm
    ibm_app  ibm_as_database  ibm_as_file  ibm_as_rmtcmd  ibm_db2        ibm_tealeaf
  4. Filter applications by category using the app-category category name command. Currently, the applications contained in these categories are not displayed.
    dmf-controller-1(config-managed-srv-app-id-filter)# app-category
    <Category>       <String> : <String>
    aaa              Category selection
    adult_content    Category selection
    advertising      Category selection
    aetls            Category selection
    analytics        Category selection
    anonymizer       Category selection
    audio_chat       Category selection
    basic            Category selection
    blog             Category selection
    cdn              Category selection
    certif_auth      Category selection
    chat             Category selection
    classified_ads   Category selection
    cloud_services   Category selection
    crowdfunding     Category selection
    cryptocurrency   Category selection
    db               Category selection
    dea_mail         Category selection
    ebook_reader     Category selection
    education        Category selection
    email            Category selection
    enterprise       Category selection
    file_mngt        Category selection
    file_transfer    Category selection
    forum            Category selection
    gaming           Category selection
    healthcare       Category selection
    im_mc            Category selection
    iot              Category selection
    map_service      Category selection
    mm_streaming     Category selection
    mobile           Category selection
    networking       Category selection
    news_portal      Category selection
    p2p              Category selection
    payment_service  Category selection
    remote_access    Category selection
    scada            Category selection
    social_network   Category selection
    speedtest        Category selection
    standardized     Category selection
    transportation   Category selection
    update           Category selection
    video_chat       Category selection
    voip             Category selection
    vpn_tun          Category selection
    web              Category selection
    web_ecom         Category selection
    web_search       Category selection
    web_sites        Category selection
    webmail          Category selection
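    As a sketch, a category-based filter might look like the following (hypothetical switch and interface names; category names taken from the list above):

    ```
    managed-service MS-CATEGORY
    	service-interface switch CORE-SWITCH-1 ethernet3
    	!
    	1 app-id-filter
    		app-category web
    		app-category email
    		filter-mode forward
    ```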
  5. The filter-mode parameter supports two modes: forward and drop. Enter filter-mode forward to allow the packets to be forwarded based on the configured applications. Enter filter-mode drop to drop these packets.
    An example of an app-id-filter configuration that drops all Facebook and IBM Tealeaf packets:
    managed-service MS
    	service-interface switch CORE-SWITCH-1 ethernet2
    	!
    	1 app-id-filter
    		app facebook
    		app ibm_tealeaf
    		filter-mode drop
CAUTION: The app-id-filter configuration filters based on flows. For example, if a session is internally classified with the tuple ip, tcp, http, google, and google_maps, adding any of these names to the filter list forwards or drops all packets of that flow once classification completes (e.g., adding tcp to the filter list forwards or drops the packets of this flow as well as those of all other tcp flows). Use caution when filtering on lower-layer protocols and apps. Also, when forwarding an application, packets at the beginning of the session are dropped until the application is identified; when dropping, packets at the beginning of the session are passed until the application is identified.
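The flow-based matching semantics in the caution above can be sketched as follows (illustrative logic only, not the Service Node implementation):

```python
def filter_matches(flow_stack, filter_apps):
    """A filter entry matches a flow if it appears anywhere in the
    flow's classification stack, not only as the most specific app."""
    return any(app in filter_apps for app in flow_stack)

flow = ["ip", "tcp", "http", "google", "google_maps"]

# Filtering on the specific app matches this flow...
assert filter_matches(flow, {"google_maps"})
# ...but filtering on a lower-layer protocol matches it too,
# along with every other tcp flow.
assert filter_matches(flow, {"tcp"})
assert filter_matches(["ip", "udp", "dns"], {"tcp"}) is False
```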

Using the CLI to Configure app-id and app-id-filter Combined

To configure app-id and app-id-filter together, follow the configuration steps described in the preceding sections. However, in this case, app-id should use a higher seq num than app-id-filter, so that traffic is processed by the app-id-filter service first and then by app-id.

This behavior can be helpful to monitor certain types of traffic. The following example illustrates a combined app-id-filter and app-id configuration.
! managed-service
managed-service MS1
service-interface switch CORE-SWITCH-1 ethernet2
!
1 app-id-filter
app facebook
filter-mode forward
!
2 app-id
collector 1.1.1.1
l3-delivery-interface L3-INTF-1
Note: This configuration has two drawbacks: app-id only receives and reports the traffic that app-id-filter forwards (Facebook traffic in this example), and this type of service chaining can cause a performance hit and high memory utilization.

Using the GUI to Configure app-id and app-id-filter

App ID and App ID Filter are in the Managed Service workflow. Perform the following steps to complete the configuration.
  1. Navigate to the Monitoring > Managed Services page. Select the table action + icon button to add a new managed service.
    Figure 55. DANZ Monitoring Fabric (DMF) Managed Services
  2. Configure the Name, Switch, and Interface inputs in the Info step.
    Figure 56. Info Step
  3. In the Actions step, select the + icon to add a new managed service action.
    Figure 57. Add App ID Action
  4. To Add the App ID Action, select App ID from the action selection input:
    Figure 58. Select App ID
  5. Fill in the Delivery Interface, Collector IP, UDP Port, and MTU inputs and select Append to include the action in the managed service:
    Figure 59. Delivery Interface
  6. To Add the App ID Filter Action, select App ID Filter from the action selection input:
    Figure 60. Select App ID Filter
  7. Select the Filter input as Forward or Drop action:
    Figure 61. Select Filter Input
  8. Use the App Names section to add app names.
    1. Select the + button to open a modal pane to add an app name.

    2. The table lists all app names. Use the text search to filter out app names. Select the checkbox for app names to include and click Append Selected.

    3. Repeat the above step to add more app names as necessary.

    Figure 62. Associate App Names
  9. The selected app names are now listed. Use the - icon button to remove any app names, if necessary:
    Figure 63. Application Names
  10. Select the Append button to add the action to the managed service and Save to save the managed service.
For existing managed services, add App ID or App ID Filter using the Edit workflow of a managed service.

Dynamic Signature Updates (Beta Version)

This beta feature allows the app-id and app-id-filter services to classify newly supported applications at runtime rather than waiting for an update in the next DANZ Monitoring Fabric (DMF) release. Perform such runtime service updates during a maintenance cycle. There can be issues with backward compatibility if attempting to revert to an older bundle. Adopt only supported versions. In the Controller’s CLI, perform the following recommended steps:
  1. Remove all policies containing app-id or app-id-filter. Remove the app-id and app-id-filter managed services from the policies using the no use-managed-service command in policy configuration mode.
    Arista Networks recommends this step to avoid errors and service node reboots during the update process. A warning message is printed right before confirming a push. Proceeding without this step may work but is not recommended as there is a risk of service node reboots.
    Note: Arista Networks provides the specific update file in the command example below.
  2. To pull the signature file onto the Controller node, use the command:
    dmf-controller-1(config)# app-id pull-signature-file user@host:path to file.tar.gz
    Password:
    file.tar.gz							5.47MB 1.63MBps 00:03
  3. Fetch and validate the file using the command:
    dmf-controller-1(config)# app-id fetch-signature-file file://file.tar.gz
    Fetch successful.
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
  4. To view files currently saved on the Controller node after the fetch operation is successful, use the following command:
    dmf-controller-1(config)# app-id list-signature-files
    # Signature-file Checksum      Fetch time
    -|--------------|-------------|------------------------------|
    1 file.tar.gz    abcdefgh12345 2023-08-02 22:20:49.422000 UTC
    Note: Only the files listed by this command can be pushed to service nodes.
  5. Push the file from the Controller to the service nodes using the following command:
    dmf-controller-1(config)# app-id push-signature-file file.tar.gz
    App ID update: WARNING: This push will affect all service nodes
    App ID update: Remove policies configured with app-id or app-id-filter before continuing to avoid errors
    App ID update: Signature file: file.tar.gz
    App ID update: Push app ID signatures to all Service Nodes? Update ("y" or "yes" to continue): yes
    Push successful.
    
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
    Sn push time : 2023-08-02 22:21:49.422000 UTC
  6. Add the app-id and app-id-filter managed services back to the policies.
    As a result of adding app-id, service nodes can now identify and report new applications to the analytics node.
    After adding back app-id-filter, new application names should appear in the app-id-filter Controller app list. To test this, enter app-id-filter submode and press the Tab to see the full list of applications. New identified applications should appear in this list.
  7. To delete a signature file from the Controller, use the command below.
    Note: DMF only allows deleting a signature file that is not actively in use by any service node, because each service node needs to keep a working file in case of issues. Attempting to delete an active file causes the command to fail.
    dmf-controller-1(config)# app-id delete-signature-file file.tar.gz
    Delete successful for file: file.tar.gz
Useful Information
The fetch and delete operations are synced with standby controllers as follows:
  • fetch: after a successful fetch on the active Controller, it invokes the fetch RPC on the standby Controller by providing a signed HTTP URL as the source. This URL points to an internal REST API that provides the recently fetched signature file.
  • delete: the active Controller invokes the delete RPC call on the standby controllers.

The Controller stores the signature files in this location: /var/lib/capture/appidsignatureupdate.

On a service node, files are overwritten and always contain the complete set of applications.
Note: An analytics node cannot display these applications in the current version.
This step is only for informational purposes:
  • Verify the bundle version on the service node by entering the show service-node app-id-bundle-version command in the service node CLI, as shown below.
    Figure 64. Before Update
    Figure 65. After Update

CLI Show Commands

Service Node

In the service node CLI, use the following show command:
show service-node app-id-bundle-version
This command shows the version of the bundle in use. An app-id or app-id-filter instance must be configured; otherwise, an error message is displayed.
dmf-servicenode-1# show service-node app-id-bundle-version
Name : bundle_version
Data : 1.680.0-22 (build date Sep 26 2023)
dmf-servicenode-1#

Controller

To obtain more information about the running version on a Service Node, or when the last push attempt was made and the outcome, use the following Controller CLI commands:

  • show app-id push-results [SN-name] (the SN name is optional)
  • show service-node <SN-name> app-id
dmf-controller-1# show app-id push-results
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time             Last Attempt Version Last Attempt Time              Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------------------|--------------------|------------------------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.660.2-33      2023-12-06 11:13:36.662000 PST 1.680.0-22       2023-09-29 16:21:11.034000 PDT 1.660.2-33           2023-12-06 11:13:34.085000 PST success
dmf-controller-1# show service-node dmf-servicenode-1 app-id
# Name              IP Address     Current Version Current Push Time              Previous Version Previous Push Time Last Attempt Version Last Attempt Time Last Attempt Result Last Attempt Failure Reason
-|-----------------|--------------|---------------|------------------------------|----------------|------------------|--------------------|-----------------|-------------------|---------------------------|
1 dmf-servicenode-1 10.240.180.124 1.680.0-22      2023-09-29 16:21:11.034000 PDT
The show app-id signature-files command displays the validated files that are available to push to Service Nodes.
dmf-controller-1# show app-id signature-files
# Signature-file Checksum      Fetch time
-|--------------|-------------|------------------------------|
1 file1.tar.gz   abcdefgh12345 2023-08-02 22:20:49.422000 UTC
2 file2.tar.gz   ijklmnop67890 2023-08-03 07:10:22.123000 UTC
The show analytics app-info filter-interface <interface-name> command displays aggregated information, collected over the last 5 minutes, about the applications seen on a given filter interface, sorted by unique flow count. The command also accepts an optional size parameter to limit the number of results; by default all results are shown.
Note: This command only works in push-per-filter mode.
dmf-controller-1# show analytics app-info filter-interface f1 size 3
# App name Flow count
-|--------|----------|
1 app1 1000
2 app2 900
3 app3 800

Syslog Messages

Syslog messages for configuring the app-id and app-id-filter services appear in a service node’s syslog through journalctl.

A Service Node syslog registers events for the app-id add, modify, and delete actions.

These events contain the keywords dpi and dpi-filter, which correspond to app-id and app-id-filter.

For example:

Adding dpi for port, 
Modifying dpi for port, 
Deleting dpi for port,
Adding dpi filter for port, 
Modifying dpi filter for port, 
Deleting dpi filter for port, 
App appname does not exist - An invalid app name was entered.

The addition, modification, or deletion of app names in an app-id-filter managed-service in the Controller node’s CLI influences the policy refresh activity, and these events register in floodlight.log.

Scale

  • The maximum number of concurrent sessions is currently set to permit fewer than 200,000 active flows per core. Performance may drop as the number of concurrent flows increases; this cap prevents the service from being overloaded. Surpassing the threshold may cause some flows not to be processed, and those new flows will not be identified or filtered. Entries for inactive flows time out after a few minutes for ongoing sessions and a few seconds after a session ends.
  • If there are many inactive sessions, DMF holds the flow contexts, reducing the number of available flows used for DPI. The timeouts are approximately 7 minutes for TCP sessions and 1 minute for UDP.
  • Heavy application traffic load degrades performance.
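A back-of-envelope sketch of what the figures above imply for sustainable new-flow rates, assuming the worst case where every flow context is held for its full inactivity timeout:

```python
MAX_FLOWS_PER_CORE = 200_000   # active-flow cap per core (from the text)
TCP_TIMEOUT_S = 7 * 60         # ~7 minutes for inactive TCP sessions
UDP_TIMEOUT_S = 60             # ~1 minute for UDP

# New flows per second per core before flow contexts are exhausted,
# in the worst case where every context lives for the full timeout.
tcp_new_flows_per_sec = MAX_FLOWS_PER_CORE // TCP_TIMEOUT_S   # ~476
udp_new_flows_per_sec = MAX_FLOWS_PER_CORE // UDP_TIMEOUT_S   # ~3333
```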

Troubleshooting

  • If IPFIX reports do not appear on an Analytics Node (AN) or Collector, ensure the UDP port is configured correctly and verify the AN receives traffic.
  • If the app-id-filter app list does not appear, ensure a Service Node (SN) is connected using the show service-node command on the Controller.
  • For app-id-filter, enter at least one valid application from the list that appears using <Tab>. Otherwise, the policy fails to install with the error message app-id-filter specified without at least one name TLV identifying application.
  • A flow may contain other IDs and protocols when using app-id-filter. For example, the specific application for a flow may be google_maps, but there may be protocols or broader applications under it, such as ssh, http, or google. Adding google_maps will filter this flow. However, adding ssh will also filter this flow. Therefore, adding any of these to the filter list will cause packets of this flow to be forwarded or dropped.
  • An IPFIX element, BSN type 14, that existed in DMF version 8.4 was removed in 8.6.
  • During a dynamic signature update, if a Service Node reboot occurs, the node will likely boot up with the correct version. To avoid traffic loss, perform the update during a maintenance window. Also, during an update, the Service Node temporarily stops sending LLDP packets to the Controller and disconnects for a short while.
  • After a dynamic signature update, do not change configurations or push another signature file for several minutes. The update will take some time to process. If there are any VFT changes, it may lead to warning messages in floodlight, such as:
    Sync job 2853: still waiting after 50002 ms 
    Stuck switch update: R740-25G[00:00:e4:43:4b:bb:38:ca], duration=50002ms, stage=COMMIT

    These messages may also appear when configuring DPI on a large number of ports.

Limitations

  • When using a drop filter, a few packets may slip through the filter before determining an application ID for a flow, and when using a forward filter, a few packets may not be forwarded. Such a small amount is estimated to be between 1 and 6 packets at the beginning of a flow.
  • When using a drop filter, add the unknown app ID to the filter list to drop any unidentified traffic if these packets are unwanted.
  • The Controller must be connected to a Service Node for the app-id-filter app list to appear. If the list does not appear and the application names are unknown, use app-id to send reports to the analytics node, and use the application names seen there to configure app-id-filter. The name must match exactly.
  • Since app-category does not currently show the applications included in that category, do not use it when targeting specific apps. Categories like basic, which include all basic networking protocols like TCP and UDP, may affect all flows.
  • For app-id, a report is only generated for a flow after that flow has been fully classified. Therefore, the number of reported applications may not match the total number of flows. These reports are sent once enough applications have been identified on the Service Node. If many applications are identified, DMF sends the reports quickly; when only a few applications are identified, DMF sends the reports every 10 seconds.
  • DMF treats a bidirectional flow as part of the same n-tuple. As such, generated reports contain the client's source IP address and the server's destination IP address.
  • When configuring many ports with app-id, there may occasionally be a few Rx drops on 16-port machines at a high traffic rate in the first couple of seconds.
  • The feature uses a cache that maps destination IP and port to the application. Performance may vary with the traffic profile because of this caching.
  • The app-id and app-id-filter services are more resource-intensive than other services. Combining them in a service chain or configuring many instances of them may lead to degradation in performance.
  • At scale, such as configuring 16 ports on the R740 DCA-DM-SEL, app-id may take a few minutes to set up on all these ports, and this is also true when doing a dynamic signature update.
  • The show analytics app-info command only works in push-per-filter VLAN mode.

Redundancy of Managed Services Using Two DMF Policies

In this method, users can employ a second policy with a second managed service to provide redundancy. The idea here is to duplicate the policies but assign a lower policy priority to the second DANZ Monitoring Fabric (DMF) policy. In this case, the backup policy (and, by extension, the backup service) will always be active but only receive relevant traffic once the primary policy goes down. This method provides true redundancy at the policy, service-node, and core switch levels but uses additional network and node resources.

Example
! managed-service
managed-service MS-SLICE-1
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag1
!
managed-service MS-SLICE-2
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag2
! policy
policy ACTIVE-POLICY
priority 101
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-1 sequence 1
1 match ip
!
policy BACKUP-POLICY
priority 100
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-2 sequence 1
1 match ip

Cloud Services Filtering

The DANZ Monitoring Fabric (DMF) supports traffic filtering to specific services hosted in the public cloud and redirecting filtered traffic to customer tools. DMF achieves this functionality by reading the source and destination IP addresses of specific flows, identifying the Autonomous System number they belong to, tagging the flows with their respective AS numbers, and redirecting them to customer tools for consumption.

The following is the list of services supported:

  • amazon: traffic with src/dst IP belonging to Amazon
  • ebay: traffic with src/dst IP belonging to eBay
  • facebook: traffic with src/dst IP belonging to FaceBook
  • google: traffic with src/dst IP belonging to Google
  • microsoft: traffic with src/dst IP belonging to Microsoft
  • netflix: traffic with src/dst IP belonging to Netflix
  • office365: traffic for Microsoft Office365
  • sharepoint: traffic for Microsoft Sharepoint
  • skype: traffic for Microsoft Skype
  • twitter: traffic with src/dst IP belonging to Twitter
  • default: traffic not matching other rules in this service. Supported types are match or drop.

The option drop instructs the DMF Service Node to drop packets matching the configured application.

The option match instructs the DMF Service Node to deliver packets to the delivery interfaces connected to the customer tool.

A default drop action is auto-applied as the last rule, except when the last rule is configured as match default. The default drop instructs the DMF Service Node to drop packets when either of the following conditions occurs:
  • The stream's source IP address or destination IP address doesn't belong to any AS number.
  • The stream's source IP address or destination IP address is affiliated with an AS number but has no specific action set.
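The rule evaluation described above can be sketched as follows (illustrative logic only; `rules` is an ordered list of (action, service) pairs as configured):

```python
def evaluate(rules, service):
    """Return the action for traffic identified as `service`.
    Traffic matching no rule hits the implicit default drop,
    unless an explicit `match default` rule is configured."""
    for action, name in rules:
        if name == service or name == "default":
            return action
    return "drop"  # auto-applied default drop

rules = [("drop", "sharepoint"), ("match", "google")]
assert evaluate(rules, "google") == "match"
assert evaluate(rules, "sharepoint") == "drop"
assert evaluate(rules, "netflix") == "drop"                      # implicit default
assert evaluate(rules + [("match", "default")], "netflix") == "match"
```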

Cloud Services Filtering Configuration

Managed Service Configuration
Controller(config)# managed-service <name>
Controller(config-managed-srv)#
Service Action Configuration
Controller(config-managed-srv)# 1 app-filter
Controller(config-managed-srv-appfilter)#
Filter Rules Configuration
Controller(config-managed-srv-appfilter)# 1 drop sharepoint
Controller(config-managed-srv-appfilter)# 2 match google
Controller(config-managed-srv-appfilter)# show this
! managed-service
managed-service sf3
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 app-filter
1 drop sharepoint
2 match google
A policy that uses a managed service with an app-filter action but no match or drop rules specified will fail to install. The example below shows a policy incomplete-policy that failed due to the absence of a match/drop rule in the managed service incomplete-managed-service.
Controller(config)# show running-config managed-service incomplete-managed-service
! managed-service
managed-service incomplete-managed-service
1 app-filter
Controller(config)# show running-config policy R730-sf3
! policy
policy incomplete-policy
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service incomplete-managed-service sequence 1
1 match any
Controller(config-managed-srv-appfilter)# show policy incomplete-policy
Policy Name : incomplete-policy
Config Status : active - forward
Runtime Status : one or more required service down
Detailed Status : one or more required service down - installed to
forward
Priority : 100
Overlap Priority : 0

Multiple Services Per Service Node Interface

The service-node capability is augmented to support more than one service action per service-node interface. Though this feature is economical regarding per-interface cost, it could cause packet drops in high-volume traffic environments. Arista Networks recommends using this feature judiciously.

Example
controller-1# show running-config managed-service Test
! managed-service
managed-service Test
service-interface switch CORE-SWITCH-1 ethernet13/1
1 dedup full-packet window 2
2 mask BIGSWITCH
3 slice l4-payload-start 0
!
4 netflow an-collector
collector 10.106.6.15 udp-port 2055 mtu 1500
This feature replaces the service-action command with sequence numbers. The allowed range of sequence numbers is 1 to 20000. In the above example, the sequence numbering determines the order in which the managed services act on the traffic.
Note: After upgrading to DANZ Monitoring Fabric (DMF) release 8.1.0 and later, the service-action CLI is automatically replaced with sequence number(s).
Specific managed service statistics can be viewed via the CLI.
When using the DMF GUI, view the above information in Monitoring > Managed Services > Devices > Service Stats.
Note: The following limitations apply to this mode of configuration:
  • The NetFlow/IPFIX-action configuration should not be followed by the timestamp service action.
  • Ensure the UDP-replication action configuration is the last service in the sequence.
  • The header-stripping service with post-service-match rule configured should not be followed by the NetFlow, IPFIX, udp-replication, timestamp and TCP-analysis services.
  • When configuring a header strip and slice action, the header strip action must precede the slice action.

Sample Service

Using the DANZ Monitoring Fabric (DMF) Sample Service feature, the Service Node forwards packets based on the max-tokens and tokens-per-refresh parameters. The sample service uses one token to forward one packet.

After consuming all the initial tokens from the max-tokens bucket, the system drops subsequent packets until the bucket is refilled with tokens-per-refresh tokens at a recurring predefined interval of 10 ms. Packet sizes do not affect this service.

Arista Networks recommends keeping the tokens-per-refresh value at or below max-tokens. For example, max-tokens = 1000 and tokens-per-refresh = 500.

Setting the max-tokens value to 1000 means that the initial number of tokens is 1000, and the maximum number of tokens stored at any time is 1000.

The max-tokens bucket reaches zero when the Service Node forwards 1000 packets before the first 10 ms period ends, at which point the Service Node stops forwarding packets. After every 10 ms interval, with tokens-per-refresh set to 500, the bucket is refilled with 500 tokens, which the service immediately uses to pass packets.

Suppose the traffic rate is higher than the refresh amount added. In that case, available tokens will eventually drop back to 0, and every 10ms, only 500 packets will be forwarded, with subsequent packets being dropped.

If the traffic rate is lower than the refresh amount, the surplus of tokens results in all packets passing. Since the system consumes fewer tokens than it gains before the next refresh interval, available tokens accumulate until they reach the max-tokens value of 1000. The system does not store any surplus tokens above the max-tokens value.

To estimate the maximum possible packets passed per second (pps), use the calculation (1000ms/10ms) * tokens-per-refresh and assume the max-tokens value is larger than tokens-per-refresh. For example, if the tokens-per-refresh value is 5000, then 500000 pps are passed.
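The refill behavior and the throughput estimate above can be sketched with a minimal simulation (illustrative only; this is not the Service Node's actual implementation):

```python
REFRESHES_PER_SEC = 1000 // 10   # one refresh every 10 ms

def max_pps(tokens_per_refresh):
    """Upper bound on sustained packets per second, assuming
    max-tokens >= tokens-per-refresh and saturated input."""
    return REFRESHES_PER_SEC * tokens_per_refresh

def simulate(max_tokens, tokens_per_refresh, pkts_per_interval, intervals):
    """Packets forwarded over `intervals` 10 ms periods."""
    tokens = max_tokens              # the bucket starts full
    forwarded = 0
    for _ in range(intervals):
        sent = min(tokens, pkts_per_interval)
        forwarded += sent
        # Refill at the end of the interval, capped at max-tokens.
        tokens = min(tokens - sent + tokens_per_refresh, max_tokens)
    return forwarded

assert max_pps(5000) == 500_000          # example from the text
# Offered load above the refresh rate: the initial 1000-token burst
# passes, then only 500 packets per interval.
assert simulate(1000, 500, 2000, 3) == 2000
# Offered load below the refresh rate: everything passes.
assert simulate(1000, 500, 100, 10) == 1000
```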

The Sample Service feature can be used as a standalone Managed Service or chained with other Managed Services.

Use Cases and Compatibility

  • Applies to Service Nodes
  • Limit traffic to tools that cannot handle a large amount of traffic.
  • Use the Sample Service before another managed service to decrease the load on that service.
  • The Sample Service is applicable when needing only a portion of the total packets without specifically choosing which packets to forward.

Sample Service CLI Configuration

  1. Create a managed service and enter the service interface.
  2. Choose the sample managed service with the seq num sample command.
    1. There are two required configuration values: max-tokens and tokens-per-refresh. There are no default values, and the service requires both values.
    2. The max-tokens value is the maximum number of tokens the token bucket can hold. The service starts with that number of tokens when first configured. Each packet passed consumes one token; if no tokens remain, packet forwarding stops. Configure the max-tokens value in the range 1 to 9,223,372,036,854,775,807 (the maximum signed 64-bit integer value).
    3. DMF refreshes the token bucket every 10 ms. The tokens-per-refresh value is the number of tokens added to the token bucket on each refresh. Each packet passed consumes one token, and when the number of tokens drops to zero, the system drops all subsequent packets until the next refresh. The number of tokens in the bucket cannot exceed the value of max-tokens. Configure the tokens-per-refresh value in the range 1 to 9,223,372,036,854,775,807 (the maximum signed 64-bit integer value).
    The following example illustrates a typical Sample Service configuration
    dmf-controller-1(config-managed-srv-sample)# show this
    ! managed-service
    managed-service MS
    !
    3 sample
    max-tokens 50000
    tokens-per-refresh 20000
  3. Add the managed service to the policy.

Show Commands

Use the show running-config managed-service sample_service_name command to view pertinent details. In this example, the sample_service_name is techpubs.

DMF-SCALE-R450> show running-config managed-service techpubs

! managed-service
managed-service techpubs
!
1 sample
max-tokens 1000
tokens-per-refresh 500
DMF-SCALE-R450>

Sample Service GUI Configuration

Use the following steps to add a Sample Service.
  1. Navigate to the Monitoring > Managed Services page.
    Figure 66. DMF Managed Services
  2. Under the Managed Services section, click the + icon to create a new managed service. Go to the Actions, and select the Sample option in the Action drop-down. Enter values for Max tokens and Tokens per refresh.
    Figure 67. Configure Managed Service Action
  3. Click Append and then Save.

Troubleshooting Sample Service

Troubleshooting

  • If the Service Node interfaces forward only a small number of packets, the max-tokens and tokens-per-refresh values likely need to be increased.
  • If fewer packets than the tokens-per-refresh value forward, ensure the max-tokens value is larger than the tokens-per-refresh value. The system discards any surplus refresh tokens above the max-tokens value.
  • When all traffic forwards, the initial max-tokens value is too large, or the tokens refreshed by tokens-per-refresh are higher than the packet rate.
  • When experiencing packet drops after the first 10ms post commencement of traffic, it may be due to a low tokens-per-refresh value. For example, calculate the minimum value of max-tokens and tokens-per-refresh that would lead to forwarding all packets.

Calculation Example

Traffic rate: 400 Mbps
Packet size: 64 bytes
400 Mbps = 400,000,000 bps
400,000,000 bps = 50,000,000 Bps
50,000,000 Bps = 595,238 pps (including a 20-byte inter-packet gap in addition to the 64-byte packet)
Packets per 1000 ms = 595,238
Packets per 1 ms = 595.238
Packets per 10 ms = 5,952
max-tokens: 5952 (the minimum value)
tokens-per-refresh: 5952 (the minimum value)
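
The arithmetic above can be reproduced with a short calculation (a sketch; the 20-byte figure approximates the per-packet inter-packet gap added to the 64-byte frame, as stated in the example):

```python
# Reproduce the minimum-token calculation for a 400 Mbps stream of 64-byte packets.
rate_bps = 400_000_000          # 400 Mbps
rate_Bps = rate_bps // 8        # 50,000,000 bytes per second
wire_bytes = 64 + 20            # 64-byte packet plus 20 bytes of inter-packet gap
pps = rate_Bps // wire_bytes    # packets per second
per_refresh = pps * 10 // 1000  # packets arriving in one 10 ms refresh interval

print(pps, per_refresh)         # 595238 5952
```

The result, 5952, is the minimum value for both max-tokens and tokens-per-refresh that forwards all packets at this rate.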

Limitations

  • In the current implementation, the Service Sample action is bursty. The token consumption rate is not spread over time, so a large burst of incoming packets can immediately consume all the tokens in the bucket. There is currently no way to select which traffic is forwarded or dropped; it depends only on when the packets arrive relative to the refresh interval.
  • Setting the max-tokens and tokens-per-refresh values too high will forward all packets. The maximum value is 9,223,372,036,854,775,807, but Arista Networks recommends staying within the maximum values stated under the description section.

Flow Diff Latency and Drop Analysis

Latency and drop information help determine if there is a loss in a particular flow and where the loss occurred. A Service Node action configured as a DANZ Monitoring Fabric (DMF) managed service has two separate taps or spans in the production network and can measure the latency of a flow traversing through these two points. It can also detect packet drops between two points in the network if the packet only appears on one point within a specified time frame, currently set to 100ms.

Latency and drop analysis require PTP time-stamped packets. The DMF PTP timestamping feature can timestamp packets as they enter the monitoring fabric, or the production network switches can timestamp them.

The Service Node accumulates latency values by flow and sends IPFIX data records with each flow's 5-tuple and ingress and egress identifiers. It sends IPFIX data records to the Analytics Node after collecting a specified number of values for a flow or when a timeout occurs for the flow entry. The threshold count is 10,000, and the flow timeout is 4 seconds.

Note: Only basic statistics are available: min, max, and mean. Use the Analytics Node to build custom dashboards to view and check the data.

Attention: The flow diff latency and drop analysis feature is switch dependent and requires PTP timestamping. It is supported on 7280R3 switches.

Configure Flow Diff Latency and Drop Analysis Using the CLI

Configure this feature through the Controller as a Service Node action in managed services using the managed service action flow-diff.

The latency configuration defines multiple tap point pairs between which latency or drop analysis occurs. A tap point pair comprises a source and a destination tap point, identified by the filter interface, policy name, or filter interface group. Based on this configuration, the Controller programs the Service Node with traffic metadata that tells it where to look for tap point information and timestamps, and which IPFIX collector to use.

Configure appropriate DMF Policies such that traffic tapped from tap point pairs in the network is delivered to the configured Service Node interface for analysis.

Configuration Steps for flow-diff

  1. Create a managed service and enter the service interface.
  2. Choose the flow-diff service action with the command: seq num flow-diff
    Note: This command enters the flow-diff submode, which supports three configuration parameters: collector, l3-delivery-interface, and tap-point-pair. All three parameters are required.
  3. Configure the IPFIX collector IP address by entering the following command: collector ip-address (the UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively).
  4. Configure the delivery interface by entering the command l3-delivery-interface delivery interface name.
  5. Configure the points for flow-diff and drop analysis using tap-point-pair parameters as specified in the following section. Multiple options to identify the tap-point include filter-interface, filter-interface-group, and policy-name. This command will require a source and a destination tap point.
  6. Optional parameters are latency-table-size, sample-count-threshold, packet-timeout, and flow-timeout. The default values are large, 10000, 100 ms, and 4000 ms, respectively.
  7. latency-table-size determines the memory footprint of flow-diff action on the Service Node.
  8. sample-count-threshold specifies the number of samples needed to generate a latency report. Every time a packet times out, it generates a sample for that flow. DMF generates a report if the flow reaches the sample threshold and resets the flow stats.
  9. packet-timeout is the time interval in which timestamps are collected for a packet. It must be larger than the time it takes the same packet to appear at all tap points. Every timeout generates a sample for the flow associated with the packet.
  10. flow-timeout is the time after which a flow that receives no further packets is evicted and a report for it is generated. The timeout for a flow refreshes each time a new packet is received; if a flow continuously receives packets within the flow-timeout interval, it is never evicted.
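
The interplay of sample-count-threshold and flow-timeout described in the steps above can be sketched as a simple event model (an illustration of the documented semantics only, not Service Node code; the function name is hypothetical):

```python
# Sketch of when a flow report is generated: either the sample count reaches
# sample-count-threshold, or no sample arrives within flow-timeout (eviction).
# Assumed semantics based on the parameter descriptions; not Service Node code.

def report_events(sample_times_ms, sample_count_threshold=10000, flow_timeout_ms=4000):
    """Given sample arrival times (ms) for one flow, return the report triggers."""
    events = []
    count = 0
    last = None
    for t in sample_times_ms:
        if last is not None and t - last > flow_timeout_ms:
            events.append(("evicted", last + flow_timeout_ms))  # flow timed out
            count = 0                                           # fresh flow entry
        count += 1
        if count == sample_count_threshold:
            events.append(("threshold", t))                     # report + stats reset
            count = 0
        last = t
    return events

# Three samples reach a threshold of 3, then a 5-second gap evicts the flow.
events = report_events([0, 100, 200, 5200], sample_count_threshold=3)
```

Lowering sample-count-threshold or flow-timeout makes reports appear sooner, mirroring the troubleshooting advice later in this section.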
The following example illustrates configuring flow-diff using the steps mentioned earlier:
dmf-controller-1(config)# managed-service managed_service_1
dmf-controller-1(config-managed-srv)# service-interface switch delivery1 ethernet1
dmf-controller-1(config-managed-srv)# 1 flow-diff
dmf-controller-1(config-managed-srv-flow-diff)# collector 192.168.1.1
dmf-controller-1(config-managed-srv-flow-diff)# l3-delivery-interface l3-iface-1
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface f1 destination filter-interface f2
dmf-controller-1(config-managed-srv-flow-diff)# latency-table-size small|medium|large
dmf-controller-1(config-managed-srv-flow-diff)# packet-timeout 100
dmf-controller-1(config-managed-srv-flow-diff)# flow-timeout 4000
dmf-controller-1(config-managed-srv-flow-diff)# sample-count-threshold 10000

Configuring Tap Points

Configure tap points using tap-point-pair parameters in the flow-diff submode, identifying each tap point by one of three identifiers: filter interface name, policy name, or filter interface group.
dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source <Tab>
filter-interface    policy-name    filter-interface-group

Both source and destination tap points must use compatible identifiers; policy-name cannot be combined with filter-interface or filter-interface-group.

The filter-interface-group option accepts any configured filter interface group, which represents a collection of tap points in push-per-filter mode. Use this optional form, for ease of configuration, when a group of tap points all expect traffic from the same source tap point or group of source tap points. For example:

  1. Instead of having two separate tap-point-pairs to represent A → B, A → C, use a filter-interface-group G = [B, C], and only one tap-point-pair A → G.
    dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source type A destination filter-interface-group G
  2. With a topology like A → C and B → C, configure a filter-interface-group G = [A, B], and tap-point-pair G → C.
    dmf-controller-1(config-managed-srv-flow-diff)# tap-point-pair source filter-interface-group G destination type C

There are some restrictions to keep in mind while configuring tap-point-pairs:

  • source and destination must exist and cannot refer to the same tap point
  • You can only configure a maximum of 1024 tap points, therefore a maximum of 512 tap-point-pairs. These limits are not for a single flow-diff managed service but across all managed services with flow-diff action.
  • filter-interface-group must not overlap with other groups within the same managed service and cannot have more than 128 members.
  • When configuring multiple tap-point-pairs using filter-interface and filter-interface-group, an interface that is part of a filter interface group cannot simultaneously be used as an individual source or destination within the same managed service.
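
The restrictions above can be summarized in a small validation sketch (a hypothetical helper for illustration; the limits are those stated in the list, and the function name and data layout are assumptions):

```python
# Sketch validating the documented tap-point-pair limits (hypothetical helper).
MAX_TAP_POINTS = 1024       # across all managed services with flow-diff action
MAX_TAP_POINT_PAIRS = 512
MAX_GROUP_MEMBERS = 128

def validate_pairs(pairs, groups):
    """pairs: list of (source, destination) tap points; groups: dict name -> set of interfaces."""
    tap_points = set()
    for src, dst in pairs:
        if src == dst:
            raise ValueError("source and destination cannot be the same tap point")
        tap_points.update((src, dst))
    if len(pairs) > MAX_TAP_POINT_PAIRS or len(tap_points) > MAX_TAP_POINTS:
        raise ValueError("too many tap points or tap-point-pairs")
    seen = set()
    for name, members in groups.items():
        if len(members) > MAX_GROUP_MEMBERS:
            raise ValueError(f"group {name} exceeds 128 members")
        if seen & members:
            raise ValueError("filter-interface-groups must not overlap")
        seen |= members
        # a group member cannot also appear as an individual tap point
        if members & (tap_points - set(groups)):
            raise ValueError("interface used both individually and in a group")
```

A configuration that violates any of these rules is rejected by the Controller, as described in the Troubleshooting section below.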

There are several caveats on what is accepted based on the VLAN mode:

Push per Filter

  • Use the filter-interface option to provide the filter interface name.
  • Use the filter-interface-group option to provide the filter interface group name.
  • The policy-name identifier is invalid in this mode.

Push per Policy

  • Accepts a policy-name identifier as a tap point identifier.
  • The filter-interface and filter-interface-group identifiers are invalid in this mode.
  • Policies configured as source and destination tap points within a tap-point-pair must not overlap.

Configuring Policy

Irrespective of the VLAN mode, configure a policy or policies so that the same packet can be tapped from two independent points in the network and then sent to the Service Node.

After creating a policy, add the managed service with flow-diff action as shown below:

dmf-controller-1 (config-policy)# use-managed-service service name sequence 1

There are several things to consider while configuring policies depending on the VLAN mode:

Push per Filter

  • Only one policy can contain the flow-diff service action.
  • A policy should have all filter-interfaces and filter-interface-groups configured as tap points in the flow-diff configuration. If filter interfaces or groups are missing from the policy, DMF may report drops when, in reality, the packets from one end of the tap-point-pair are simply not forwarded to the Service Node.
  • Avoid adding filter interfaces or groups that are not part of any tap-point-pair; no latency or drop analysis is performed for them, so they only forward unnecessary packets to the Service Node, which are reported as unexpected.

Push per Policy

  • Add the flow-diff service to two policies when using policy-name as source and destination identifiers. In this case, there are no restrictions on how many filter interfaces a policy can have.
  • A policy configured as one of the tap points will fail if it overlaps with the other policy in a tap-point-pair, or if the other policy does not exist.
In both VLAN modes, policies must have PTP timestamping enabled. To do so, use the following command:
dmf-controller-1 (config-policy)# use-timestamping

Configuring PTP Timestamping

This feature depends on configuring PTP timestamping for the packet stream going through the tap points. Refer to the Resources section for more information on setting up PTP timestamping functionality.

Show Commands

The following show commands provide helpful information.

The show running-config managed-service managed service command helps check whether the flow-diff configuration is complete.
dmf-controller-1(config)# show running-config managed-service flow-diff 
! managed-service
managed-service flow-diff
service-interface switch DCS-7050CX3-32S ethernet2/4
!
1 flow-diff
collector 192.168.1.1
l3-delivery-interface AN-Data
tap-point-pair source filter-interface ingress destination filter-interface egress
The show managed-services managed service command provides status information about the service.
dmf-controller-1(config)# show managed-services flow-diff 
# Service Name Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 flow-diff    DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps

The show running-config policy policy command checks whether the policy flow-diff service exists, whether use-timestamping is enabled, and the use of the correct filter interfaces.

The show policy policy command provides detailed information about a policy and whether any errors are related to the flow-diff service. The Service Interfaces tab section shows the packets transmitted to the Service Node and IPFIX packets received from the Service Node.

Note: Regarding two policies in a tap-point-pair (one source and one destination), only one policy will show stats about packets received from the Service Node. This output is by design, as only a single VLAN exists for the IPFIX packets transmitted from the Service Node, so there can only be one policy.
dmf-controller-1 (config)# show policy flow-diff-1
Policy Name: flow-diff-1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 4
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 2
Post Match Filter Traffic: 215Mbps
Total Delivery Rate: -
Total Pre Service Rate : 217Mbps
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: flow-diff
Installed Time : 2023-11-16 18:15:27 PST
Installed Duration : 19 minutes, 45 secs
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch   IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------|--------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 BP1    7280SR3E Ethernet25 up    rx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch    IF Name    State Dir Packets Bytes  Pkt Rate Bit Rate Counter Reset Time
-|-------|---------|----------|-----|---|-------|------|--------|--------|------------------------------|
1 AN-Data 7050SX3-1 ethernet41 up    tx  81      117222 0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name Role         Switch          IF Name     State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|------------|------------|---------------|-----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 flow-diff    pre-service  DCS-7050CX3-32S ethernet2/4 up    tx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
2 flow-diff    post-service DCS-7050CX3-32S ethernet2/4 up    rx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Core Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch          IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time
-|---------------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 7050SX3-1       ethernet7  up    rx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
2 7050SX3-1       ethernet56 up    rx  81       117222      0        -        2023-11-16 18:18:18.837000 PST
3 7050SX3-1       ethernet56 up    tx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
4 7280SR3E        Ethernet7  up    tx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
5 DCS-7050CX3-32S ethernet28 up    tx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
6 DCS-7050CX3-32S ethernet28 up    rx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
~ Failed Path(s) ~
None.

Syslog Messages

The Flow Diff Latency and Drop Analysis feature does not create Syslog messages.

Troubleshooting

Controller

Policies dictate how and what packets are directed to the Service Node. Policies must be able to stream packets from two distinct tap points so that the same packet gets delivered to the Service Node for flow-diff and drop analysis.

Possible reasons for flow-diff and drop analysis not working are:

  • flow-diff action is not added to both policies in a tap-point-pair in push-per-policy mode.
  • flow-diff action exists in a policy that does not have all filter interfaces or groups configured as tap-point-pairs in push-per-filter mode.

A policy programmed to use flow-diff service action can fail for multiple reasons:

  • The flow-diff configuration is incomplete: source or destination tap points are missing, a policy-name identifier is used in push-per-filter mode, or a filter-interface or filter-interface-group identifier is used in push-per-policy mode.
  • In the push-per-policy mode, the policy doesn’t match the source or destination tap point for any configured tap-point-pair.
  • In the push-per-policy mode, policy names configured as tap points within a tap-point-pair overlap or do not exist.
  • There are more than 512 tap-point-pairs configured or more than 1024 distinct tap points. These limits are global across all flow-diff managed services.
  • filter-interface-groups overlap with each other within the same managed service or have more than 128 group members
  • filter-interface is being used individually as a tap point and as a part of some filter-interface-group within the same managed service

Reasons for failure are available in the runtime state of the policy and viewed using the show policy policy name command.

After verifying the correct configuration of the policy and flow-diff action, enabling the log debug mode and reviewing the floodlight logs (use the fl-log command in bash mode) should show the ipfix-collector, traffic-metadata, and flow-diff gentable entries sent to the Service Node.

If no computed latency reports are generated, there are two likely causes. First, the sample-count-threshold value is not being reached; either lower the sample-count-threshold until reports are generated or increase the number of unique packets per flow. Second, no flows are being evicted; the flow-timeout timer refreshes every time a new packet is received on the flow before the timeout expires, so a flow that continuously receives packets never times out. Lower the flow-timeout value if the default of 4 seconds prevents flows from expiring. After a flow expires, the Controller generates a report for that flow.

For packet-timeout, this value must be larger than the time expected to receive the same packet on every tap-point.

For A->B, if it takes 20ms for the packet to appear at B, packet-timeout must be larger than this time frame to collect timestamps for both these tap points and compute a latency in one sample.

If the Controller generates many unexpected reports, ensure taps are in the correct order in the Controller configuration. If there are many drop reports, ensure the same packet is received at all relevant taps within the packet-timeout window.

Limitations

  • Only 512 tap-point-pairs and 1024 distinct tap points are allowed.
  • filter-interface-group used as a tap point must not overlap with any other group within the same managed service and must not have more than 128 members.
  • A filter interface cannot be used individually as a tap point and as a part of a filter-interface-group simultaneously within the same managed service.
  • There is no chaining if a packet flows through three or more tap points in an A->B->C->...->Z topology. The only computed latency reports will be for tap-point-pairs A->B, A->C, …, and A->Z if these links are specified, but B->C, C->D, etc., will not be computed.
  • Hardware RSS firmware in the Service Node currently cannot parse L2 header timestamps, so packets for all L2 header timestamps are sent to the same lcore; however, RSS does distribute packets correctly to multiple lcores when using src-mac timestamping.
  • PTP timestamping doesn’t allow rewrite-dst-mac, so filter interfaces cannot be used as tap points in push-per-policy mode.
  • In push-per-policy, a policy can only have one filter interface.
  • Each packet is hashed from the L3 header onward to a 64-bit value; if two packets hash to the same value, the system assumes the underlying packets are the same.
  • Currently, on the flow-diff action in the Service Node, if packets are duplicated so that N copies of the same packet are received:
    • N-1 latencies are computed.
    • The ingress identifier is the earliest timestamp.
  • The system reports timestamps as unsigned 32-bit values, with the maximum timestamp being 2^32-1, corresponding to approximately 4.29 seconds.
  • Only min/mean/max latencies are currently reported.
  • Packets are hashed from the L3 header onwards, meaning if there is any corrupted data past the L3 header, it will lead to drop reports. The same packet must appear at two tap points to generate a computed latency report.
  • In A->B, if B packets appear before A, an unexpected type report is generated.
  • The tap point at which the packet first appears (that is, with the earliest timestamp) is considered the source.
  • While switching the latency configuration, the system may generate a couple of unexpected or drop reports at the beginning.
  • Users must have good knowledge of network topology when setting up tap points and configuring timeout or sample threshold values. Improper configuration may lead to drop or unexpected reports.
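
The packet-identity matching noted in the limitations above hashes each packet from the L3 header onward to a 64-bit value. A sketch of the idea follows (illustrative only; the actual hash function used by the Service Node is not specified in this document, so BLAKE2b is an assumption for the example):

```python
import hashlib

# Illustrative sketch: identify a packet by a 64-bit hash of its bytes from the
# L3 header onward. Two packets with equal hashes are treated as the same packet,
# so corruption past the L3 header changes the hash and surfaces as a drop report.
# The Service Node's actual hash function is not documented here.

def packet_id(packet_bytes: bytes, l3_offset: int = 14) -> int:
    """64-bit identity of a packet, skipping the 14-byte Ethernet (L2) header."""
    digest = hashlib.blake2b(packet_bytes[l3_offset:], digest_size=8).digest()
    return int.from_bytes(digest, "big")

pkt = bytes(14) + b"ip-header-and-payload"
same = b"\xff" * 14 + b"ip-header-and-payload"   # different L2 header, same L3 bytes
assert packet_id(pkt) == packet_id(same)          # L2 bytes are ignored
```

Because only the bytes from the L3 header onward contribute to the hash, the same packet seen at two tap points matches even though each tap may rewrite L2 fields.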

Service Node Management Migration L3ZTN

After the first boot (initial configuration) is complete, an administrator can use the CLI to move a Service Node (SN) in Layer-3 topology mode from an old DANZ Monitoring Fabric (DMF) Controller to a new one.

Note: For appliances to connect to the Controller in Layer-3 Zero Touch Network (L3ZTN) mode, you must configure the Controller's deployment mode as pre-configure.


To migrate a Service Node's management to a new Controller, follow the steps outlined below:

  1. Remove the Service Node from the old Controller using the following command:
    controller-1(config)# no service-node service-node-1
  2. Connect the data NICs' sni interfaces to the new core fabric switch ports.
  3. SSH to the Service Node and configure the new Controller's IP address using the zerotouch l3ztn controller-ip command:
    service-node-1(config)# zerotouch l3ztn controller-ip 10.2.0.151
  4. Get the management MAC address (of interface bond0) of the Service Node using the following command:
    service-node-1(config)# show local-node interfaces 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Interfaces~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Interface Master Hardware address        Permanent hardware address Operstate Carrier Bond mode     Bond role
    ---------|------|------------------------|--------------------------|---------|-------|-------------|---------|
    bond0            78:ac:44:94:2b:b6 (Dell)                            up        up      active-backup
  5. Add the Service Node and its bond0 interface's MAC address (obtained in the step above) to the new Controller:
    controller-2(config)# service-node service-node-1
    controller-2(config-service-node)# mac 78:ac:44:94:2b:b6
  6. After associating the Service Node with the new Controller, reboot the Service Node.
  7. Once the Service Node is back online, the Controller should receive a ZTN request. If the Service Node's image differs from the Service Node image file on the new Controller, the mismatch triggers the Service Node to perform an auto-upgrade of the image and to reboot twice.
    controller-2# show zerotouch request
    41 78:ac:44:94:2b:b6 (Dell) 10.240.156.10 get-manifest 2024-06-12 23:14:42.284000 UTC ok The request has succeeded
    56 24:6e:96:78:58:b4 (Dell) 10.240.156.91 get-manifest 2024-06-12 23:13:38.633000 UTC ok The request has succeeded
  8. Then the Service Node should appear as a member of the new DMF fabric, which you can verify by using the following command:
    controller-2# show service-node service-node-1 details

Viewing Information about Monitoring Fabric and Production Networks

This chapter describes how to view information about the DANZ Monitoring Fabric (DMF) and connected production networks.

Monitoring DMF Interfaces

Monitoring DANZ Monitoring Fabric (DMF) Interfaces

Using the GUI to Monitor DMF Interfaces

Click the Menu control to view statistics for the specific interface and select Monitor Stats. The system displays the following dialog box.
Figure 1. Monitor Interface Stats

This window displays statistics for up to four selected interfaces and provides a line graph (sparkline) that indicates changes in packet rate or bandwidth utilization. The auto-refresh rate for these statistics is ten seconds. Mouse over the sparkline to view the range of values represented. To clear statistics for an interface, click the Menu control and select Clear Stats.

To view statistics for multiple interfaces, enable the checkbox to the left of the Menu control for each interface, click the Menu control in the table, and select Monitor Selected Stats.
Figure 2. Monitoring > Interfaces

To view the interfaces assigned a specific role, use the Monitoring > Interfaces command and select the Filter, Delivery, or Service sub-option from the menu.

Viewing Oversubscription Statistics

To view peak bit rate statistics used to monitor bandwidth utilization due to oversubscription, click the Menu in the Interfaces table, select Show/Hide Columns, and enable the Peak Bit Rate checkbox on the dialog box that appears.

After enabling the Peak Bit Rate column, a column appears in the Interfaces table that indicates the relative bandwidth utilization of each interface. When using less than 50% of the bandwidth, the bar appears in green; 50-75% changes the bar to yellow, and over 75% switches the bar color to red.
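
The color thresholds described above can be expressed as a small mapping (a sketch; the function name is illustrative, and the thresholds are those stated in the text):

```python
# Sketch of the Peak Bit Rate bar coloring described above (illustrative helper).
def utilization_color(percent: float) -> str:
    """Map bandwidth utilization (%) to the bar color shown in the Interfaces table."""
    if percent < 50:
        return "green"     # less than 50% of bandwidth in use
    elif percent <= 75:
        return "yellow"    # 50-75% utilization
    return "red"           # over 75% utilization
```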

To display statistics for a specific interface, select Monitor Stats from the Menu control to the left of the row.

To reset the statistics counters, select Clear Stats from the Menu control.

Note: DANZ Monitoring Fabric (DMF) Controllers generate SNMP traps for link saturation and packet loss. For more information, please refer to the DMF 8.6 Deployment Guide - SNMP Trap Generation for Packet Drops and Link Saturation chapter.

Using the CLI to Monitor Interface Configuration

To display the currently configured interfaces, enter the show interface-names command, as shown in the following example.
Ctrl-2> show interface-names

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF      Switch              IF Name   Dir State Speed  VLAN Tag Analytics IP address Connected Device
-|-----------|-------------------|---------|---|-----|------|--------|---------|----------|----------------|
1 Lab-traffic Arista-7050SX3-T3X5 ethernet7 rx  up    10Gbps 0        True

~ Delivery Interface(s) ~
None.


~ Service Interface(s) ~
None.
 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Recorder Fabric Interface(s)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF          Switch              IF Name    Dir           State Speed  Connected Device
-|---------------|-------------------|----------|-------------|-----|------|------------------|
1 PR-NewHW-Intf   Arista-7050SX3-T3X5 ethernet25 bidirectional up    25Gbps PR-NewHW ens1f0
2 RMA-CNrail-intf Arista-7050SX3-T3X5 ethernet35 bidirectional up    25Gbps RMA-CNrail ens1f0
Note: The name is used when configuring a policy.
To display a summary of the current DANZ Monitoring Fabric (DMF) configuration, enter the show fabric command, as in the following example.
controller-1# show fabric
~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~
Number of switches : 3
Inport masking : False
Start time : 2018-03-16 15:42:43.322000 PDT
Number of unmanaged services : 0
Filter efficiency : 0:1
Number of switches with service interfaces : 0
Total delivery traffic (bps) : 168bps
Number of managed service instances : 2
Number of service interfaces : 0
Match mode : l3-l4-offset-match
Number of delivery interfaces : 6
Max pre-service BW (bps) : 20Gbps
Auto VLAN mode : push-per-policy
Number of switches with delivery interfaces : 2
Number of managed devices : 1
Uptime : 5 hours, 4 minutes
Total ingress traffic (bps) : 160bps
Max filter BW (bps) : 221Gbps
Auto Delivery Interface Strip VLAN : True
Number of core interfaces : 12
Overlap : True
Number of switches with filter interfaces : 2
State : Enabled
Max delivery BW (bps) : 231Gbps
Total pre-service traffic (bps) : 200bps
Track hosts : True
Number of filter interfaces : 5
Number of active policies : 2
Number of policies : 5
~~~~~~~~~~~~~ Aggregate Interface Statistics ~~~~~~~~~~~~~
# Interface Type Dir Packets BytesPkt Rate Bit Rate
-|------------------|---|-------|------|--------|--------|
1 Filter Interface rx2444455611 0160bps
2 Delivery Interface tx4050421227 0168bps
---------------------example truncated--------------------
controller-1#

Viewing Devices Connected to the Monitoring Fabric


Using the GUI to View Fabric-Connected Devices

To view a display of the devices connected to the Controller, select Fabric > Connected Devices from the main menu. The system displays the following screen.
Figure 3. Connected Devices

The Switch Interfaces table displays the unique devices connected to each out-of-band filter or delivery switch. It lists each interface's MAC address (Chassis ID) on every device connected to the fabric as a separate device.

The Unique Device Names table lists all unique device names with a count of interfaces in parentheses. Clicking a row in this list filters the contents of the Switch Interfaces table.

To view a display of the devices discovered by the Controller through the Link Aggregation Control Protocol (LACP), select Fabric > Connected LACP from the main menu.

The system displays the following screen.
Figure 4. Connected LACP

This page displays the devices discovered by the Controller through LACP.

Using the CLI to View Switch Configuration

To verify the switch interface configuration, enter the show topology command, as shown in the following example.
controller> show topology
~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch      IF Name  state speed   Connected Device
-|------|-----------|--------|-----|-------|----------------|
1 f1     filter-sw-1 s11-eth1 up    10 Gbps
2 f2     filter-sw-1 s11-eth2 up    10 Gbps
~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch      IF Name  state speed   Connected Device
-|------|-----------|--------|-----|-------|----------------|
1 d1     filter-sw-2 s12-eth1 up    10 Gbps
2 d2     filter-sw-2 s12-eth2 up    10 Gbps
~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF           Switch    IF Name Dir state speed   Connected Device
-|----------------|---------|-------|---|-----|-------|----------------|
1 post-serv-intf-1 core-sw-1 s9-eth2     up    10 Gbps
2 pre-serv-intf-1  core-sw-1 s9-eth1     up    10 Gbps
~~~~~~~~~~~~~~~~~~~~~~~~ Core Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~
# Src SwitchSrc IF Src Speed Dst SwitchDst IF Dst Speed
-|-------------|--------|---------|-------------|--------|---------|
1 core-sw-3 s13-eth2 10 Gbps delivery-sw-2 s15-eth3 10 Gbps
2 core-sw-3 s13-eth1 10 Gbps delivery-sw-1 s14-eth3 10 Gbps
3 filter-sw-1 s11-eth3 10 Gbps core-sw-2 s10-eth1 10 Gbps
4 core-sw-2 s10-eth1 10 Gbps filter-sw-1 s11-eth3 10 Gbps
5 delivery-sw-2 s15-eth3 10 Gbps core-sw-3 s13-eth2 10 Gbps
6 core-sw-2 s10-eth2 10 Gbps filter-sw-2 s12-eth3 10 Gbps
7 filter-sw-2 s12-eth3 10 Gbps core-sw-2 s10-eth2 10 Gbps
8 delivery-sw-1 s14-eth3 10 Gbps core-sw-3 s13-eth1 10 Gbps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#  DMF IF Switch        IF       Role     State Packets Bytes  Pkt Rate Bit Rate
--|------|-------------|--------|--------|-----|-------|------|--------|--------|
1  f1     filter-sw-1   s11-eth1 filter   up    0       0      0        -
2  f2     filter-sw-1   s11-eth2 filter   up    0       0      0        -
3  d1     filter-sw-2   s12-eth1 delivery up    8       600    0        -
4  d2     filter-sw-2   s12-eth2 delivery up    8       600    0        32 bps
6  -      core-sw-3     s13-eth1 core     up    3432    257400 0        32 bps
7  -      delivery-sw-2 s15-eth3 core     up    3431    257325 0        32 bps
8  -      delivery-sw-1 s14-eth3 core     up    3430    257250 0        32 bps
9  -      core-sw-2     s10-eth1 core     up    3429    257175 0        32 bps
10 -      filter-sw-1   s11-eth3 core     up    3431    257325 0        32 bps
11 -      core-sw-3     s13-eth2 core     up    3432    257400 0        32 bps
12 -      filter-sw-2   s12-eth3 core     up    3429    257175 0        32 bps

Viewing Information about a Connected Production Network

Once the monitoring fabric is set up and connected to packet feeds from the production network, DANZ Monitoring Fabric (DMF) starts to gather information about the production network. By default, DMF provides a view of all hosts in the production network visible from the filter interfaces. View this information in the GUI under Monitoring > Host Tracker. As shown below, the output displays the host MAC address, IP address, when and on which filter interface traffic from the host was seen, and DHCP lease information. To display this information in the CLI, enter the show tracked-hosts command, as shown in the following example.
# show tracked-hosts
# IP Address  MAC Address       Host name Filter interfaces VLANs Last seen Extra info
--|----------|-----------------|---------|-----------------|-----|---------|----------|
1  10.0.0.3   40:a6:d9:7c:9f:9f Apple     wireless-poe-1    0     1 hours
2  10.0.0.6   98:fe:94:1c:37:06 Apple     wireless-poe-1    0     42 min
3  10.0.0.6   dc:2b:61:81:64:45 Apple     wireless-poe-1    0     3 hours
4  10.0.0.7   20:c9:d0:48:f3:3d Apple     wireless-poe-1    0     2 hours
5  10.0.0.11  60:03:08:9b:4f:48 Apple     wireless-poe-1    0     13 min
6  10.0.1.3   14:10:9f:e4:e6:bf Apple     wireless-poe-1    0     51 min
----------------------------------output truncated----------------------------------

DMF also tracks the DNS names of hosts by capturing and analyzing packets using several different protocols. To manage host-name tracking, from config-analytics mode, use the track command, which has the following syntax:

[no] track { arp | dns | dhcp | icmp }

For example, the following command enables tracking using DNS:
controller-1(config)# analytics
controller-1(config-analytics)# track dns
Note: When DNS tracking is enabled, DNS traffic is not included in DMF policies.
Exclude host tracking for a specific filter interface using the no-analytics option with the role command.
controller-1(config)# switch DMF-FILTER-SWITCH-1
controller-1(config-switch)# interface ethernet20
controller-1(config-switch-if)# role filter interface-name TAP-PORT-01 no-analytics

This command disables all host tracking on interface TAP-PORT-01.

Using the CLI to View Connected Devices and LAGs

Some information on devices in the production network, discovered using LLDP and CDP, can be seen using the show connected-devices command. The data helps determine if filter interfaces are connected to the intended production device.

The show connected-devices command, entered from login mode, displays information about devices connected to DANZ Monitoring Fabric (DMF) switch interfaces. DMF extracts this information from link-level protocol packets, such as LLDP, CDP, and UDLD, and ignores expired link-level data.

Users can view the most recent events related to a particular connected device using the show connected-devices history device_alias CLI command.
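
For example, review recent events for a device from login mode. The alias TAP-ROUTER-01 below is hypothetical; substitute a device alias reported by show connected-devices:

controller-1> show connected-devices history TAP-ROUTER-01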

Connecting a DMF switch interface to a SPAN port may result in inaccurate information because some vendor devices mirror link-level packets to the SPAN port.

To display details about the link aggregation groups connected to DMF switch interfaces, use the show connected-lacp command. DMF extracts this information from LACP protocol packets and ignores expired LACP information.
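
For example, enter the following from login mode (the output depends on which LACP-speaking devices are connected, so none is shown here):

controller-1> show connected-lacp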

Managing DMF Policies

This chapter describes how to configure and work with policies in the DANZ Monitoring Fabric (DMF).

Overview

A policy selects the traffic to be copied from a production network to one or more tools for analysis. To define a policy, identify the traffic source(s) (filter interfaces), the match rules to select the type of traffic, and the destination tool(s) (delivery interfaces). The DANZ Monitoring Fabric (DMF) Controller automatically forwards the selected traffic based on the fabric topology. Define match rules to select interesting traffic for forwarding to the tools connected to the specified delivery interfaces. Users can also send traffic to be processed by a managed service, such as time stamping, slicing, or deduplication, on a DMF service node. Forward the output from the service node to the appropriate tool for analysis.

While policies can be simple, they can also be more complicated when optimizing hardware resources, such as switching TCAM space. Also, DMF provides different switching modes to optimize policies based on use cases and switch capabilities. Arista Networks recommends planning the switching mode before configuring policies in a production deployment.

For further information, refer to the chapter Advanced Policy Configuration.

DMF Policies Page

Overview

While retaining all information from the previous version, the new policy page features a new layout and design and offers additional functionality for easier viewing, monitoring, and troubleshooting of policies.

Figure 1. DMF Policies

Header Action Items

  • Refresh Button

Figure 2. Refresh Button

The page refreshes every 60 seconds automatically. Click the Refresh button to manually refresh the page.

  • Create Policy Button

Figure 3. Create Policy Button

Click the + Create Policy button to open the policy creation page.

  • Clear Stats Button

Figure 4. Clear Stats Button

Click the Clear Stats button to clear the runtime statistics for all DMF interfaces.

Quick Filters

  • Show Quick Filters Button

Figure 5. Show Quick Filters

By default, the feature is toggled on and displays four quick filter options. When toggled off, the four quick filters are no longer displayed.

Figure 6. Four Filter Options

Four quick filter cards display the policy counts that meet the filter criteria and the filter name. The quick filter cards support multi-select.

  • Radio Buttons

Figure 7. Table View / Interface View

Switch page views between Table View and Interface View. Refer to the Table View and Interface View sections below for more information.

Table View

The table view is the default landing view of the Policies Page.

The page displays an empty table with the Create Policy button when no configured policies exist.

Figure 8. DMF Policies

Conversely, when configured policies exist, the table view displays the list of policies.

Figure 9. List of Policies

Action Buttons

Several buttons in the policy table provide quick access to corresponding functionality. These are:

Figure 10. Action Buttons

Delete Button

  • Disabled by default (when no policies are selected).

  • Enabled when one or more policies are selected.

  • Used to delete selected policies.

Edit Button

  • Disabled by default (when no policies are selected).

  • Enabled only when a policy is selected.

  • Navigates to the policy edit workflow.

 

Duplicate Button

  • Disabled by default (when no policy is selected).

  • Enabled only when one policy is selected.

  • Navigates to the policy create workflow with an empty name input field while retaining the settings of the selected policy.

Table View Filters

Figure 11. Filter Views

Click the Filter button to open the filter menu.

Policy Filter(s)

  • There are four quick policy filters. The first three filters overlap with the quick filters; thus, enabling or disabling them will trigger changes to the quick filter button.

DMF Interface Name(s)

  • Filters policies by the DMF interfaces selected from the drop-down list.

  • Searchable

  • Allows multiple selections applying OR logic.

Policies Table

Figure 12. Policy Table

The Policy table displays all policies; each column shows the number of interfaces and services corresponding to that policy.

Figure 13. Search

Table Search

The Policy table supports search functionality. Click the magnifying glass icon in the last column to activate the search input fields and search the results by the context of each column.


Figure 14. Table Search
Figure 15. Expand Icon

Expand Policy + Icon

Hidden for an unconfigured policy. Click the expand + icon to view the policy's interfaces and services information.

Figure 16. Expanded View Example
Figure 17. Expand Group

Interfaces Group Expand + Icon

For policies configured with an interface group, a group expand + icon displays by default. Click the group expand + icon to view detailed information about the interfaces belonging to that group.

Figure 18. Filter Interface Details

Policy Name Tooltip

Hovering over policy names displays the tooltips for the policy, including Configuration / Runtime / Details state.

Figure 19. Tooltip

Policy Error Icon

Figure 20. Policy Error Icon

Policies with errors will display this icon after the policy name.

Figure 21. Error with Policy Name

Clicking the error icon will display an error window with detailed information.

Figure 22. Detailed Error Information

Checkbox

Figure 23. Checkbox

Disabled for unconfigured policies. Use the checkbox to select a policy and enable the applicable action buttons (described above) as required.

Table Interaction

  • All columns support sorting.

  • Clicking on a policy name opens the policy table split view. The table on the left displays the policy names. The table on the right provides two tabs showing Configuration and Operational Details.

  • Select a policy name from the Policy Name list to view its configuration or operational details in the split view.

  • Use the icon to view the information in full-screen mode or the X icon to close the split view and return to the table view.
Figure 24. DMF Policies

Configuration Details

Access the Configuration Details tab by selecting a policy in either Table View or Interface View. This tab displays all of the configured settings for the selected policy.

The top row of the Configuration Details tab displays the selected policy name and an Edit and Delete button. The Edit button opens the Edit Policy configuration page with policy information prefilled, and the Delete button opens a confirmation dialog window before deleting a policy. The default Table View opens after deleting a policy.

Figure 25. Configuration Details

The second component of the Configuration Details is the Quick Facts box. This component displays the Description, Action, Push VLAN, Priority, Active, Scheduling Start Time, Policy Run Duration, PTP Timestamping, and Root Switch values.

  • Description: An info icon shows the entire description in a tooltip.

  • Action: Forward, Drop, Capture, or None.

  • Active: Policy active status, Yes or No.

  • Scheduling Start Time: Either Automatic or the date and time the policy is scheduled to start, in the current time zone configured on DMF. When setting DateTime to Now during policy creation, the creation time becomes the scheduling start time.

    • Automatic: The policy always runs; there is no expiration.

    • Now: The policy starts immediately; any configured duration or packet limits apply.

      Figure 26. Start Time
  • Run Policy: The duration the policy should run. The default value is Always. Set a time limit (e.g., 4 hours) or a packet limit (e.g., 1,000 packets). The tooltip explains that the policy stops running when it reaches either limit.

The third component is the Rules Table, which displays all Match Traffic rules configured for the policy. The default value is Allow All Traffic. Optionally, configure Deny All Traffic.

Figure 27. Allow All Traffic
Figure 28. Deny All Traffic

When configuring custom rules, the Rules Table is displayed. The table is horizontally scrollable, and each column is searchable and sortable. The Edit Policy feature provides rule management, including Edit, Add, and Delete functionality.

Figure 29. Rules Table

The next component is the Interface Info Columns.

Figure 30. Information Columns

There are three primary columns: Traffic Sources, Services, and Destination Tools.

  • The Traffic Sources column includes Filter Interfaces, vCenters, and CloudVision Portal associated with the policy.

  • The Services column includes Managed Services and Services associated with the policy.

  • The Destination Tools column includes Delivery interfaces and RN Fabric Interfaces associated with the policy.

These columns display the DMF Interface name in the interface card, and the name includes a link to the Interfaces page. The switch name and physical interface name appear in this format: SWITCH-NAME / INTERFACE-NAME under the DMF interface name. The bit rate and packet rate operational state data appear for each interface. Each column is only displayed if the policy has one or more interfaces of that type.

Figure 31. Traffic Sources

The Services column renders for all policies that have at least one service. The service name appears on each card and contains a link to either the Services or Managed Services page. Under the service name, the service type (Managed Service or Service) appears; if the service has a backup, its name also appears.

Figure 32. Managed Service

There is a special case for policies that have CloudVision port mirroring sessions. To differentiate the CloudVision source interfaces from the auto-generated DMF filter interfaces, DMF creates two columns: CloudVision and Filter Interfaces.

The cards in the CloudVision column show the connected CloudVision portal and the number of port mirroring sessions for each device in the CloudVision portal. Filter Interfaces and vCenters are now in the Filter Interfaces column. There are no differences between the Services and Destination Tools columns.

The last component only displays for policies with CloudVision port mirroring sessions.

Figure 33. Port Mirroring Sessions

The Port Mirroring Session Entries table shows all configured Port Mirroring Sessions for a CloudVision portal. The Device, Source Interface, Monitor Type, Tunnel Source, Tunnel Endpoint, SPAN Interface, and Direction columns display the same values configured in the Port Mirroring Table in the Add Traffic Sources component in the Create Policy flow. Each column is sortable.

For more information on the configuration flow for CloudVision port mirroring, please refer to the documentation in the Create Policy section.

Operational Details

Clicking on the Operational Details Tab navigates to the Operational Details view.

Figure 34. Operational Details
Figure 35. Action Buttons
  • Edit: Clicking the Edit button opens the Editing Policy window for making changes to the policy.

  • Delete: Clicking the Delete button deletes the policy.

  • Edit Layout: Clicking the Edit Layout button opens the editing layout window. Move the widgets by dragging the components in order of user preference. Click the Save button to save the changes. DMF preserves the order of the widgets when the same user logs back in.

Figure 36. Edit Layout

Widgets

Status / Information

Status and information include basic operational information about the policy.

Figure 37. Operational Information

Installed Duration

Hover over the info icon to see the installed time in the UTC time zone.

Figure 38. Install Time

Top Filter and Delivery Interfaces by Traffic

Figure 39. Top Filter and Delivery Interfaces by Traffic
Figure 40. Select Metric

Click the Metric Drop-down menu and choose the metrics to display in the chart. Only the selected metrics appear in the Badge, Labels, and Bar Chart.

  • Badge: Colored dots and text indicate the content represented by different bars in the bar chart.

  • Interface Name

Figure 41. Labels

Hover over the interface name to see the full name in the tooltips.

  • Labels: Display the number and unit corresponding to the bar.
  • Bar Chart: Displays the numerical value of traffic.
  • Empty State
    • Display title, last updated time, and disabled metric drop-down.
    • The Edit Policy button opens the edit policy window.

Top Core Interfaces by Traffic

Figure 42. Top Core Interfaces by Traffic

The Top Core Interfaces by Traffic chart is similar to the Top Filter and Delivery Interfaces by Traffic charts, with a Metric drop-down, badge, interface names, labels, and bar chart offering similar functionality.

Errors & Dropped Packets

Figure 43. Errors

The Errors chart is similar to the Top Filter and Delivery Interfaces by Traffic charts, with a Metric drop-down, badge, interface names, labels, and bar chart offering similar functionality. Hovering over a bar displays all error counts and rate information.

Figure 44. Packets Dropped

The Dropped Packets chart is similar to the Top Filter and Delivery Interfaces by Traffic charts, with a badge, interface names, labels, and bar chart offering similar functionality. Hovering over a bar displays all dropped packet counts and rate information.

Optimized Matches

Displays optimized match rules.

Figure 45. Optimized Matches

Interface View

As a new feature of the DMF Policies page, the Interface view offers an alternative way to view policies, allowing an intuitive visualization of all policy-related interfaces.

Figure 46. Interface View

Policies Column

Figure 47. Policies Column
  • A Policies header displays the policy count: the total count when no filters are applied, or the filtered count in the format x Associated.
  • The drop-down menu enables data sorting using multiple attributes.
  • The Delete button deletes the selected policies.
  • The Edit button opens the selected policy in edit mode.
  • The Filter drop-down is similar to the table view filters but without an interface filtering option.
    Figure 48. Filter
  • A list of policies with quick facts and user interactions.
  • The checkbox enables policy selection for deletion and editing.
  • Badges with different colors indicate policy run time status.
  • Policy name with tooltip on hover displaying configuration, runtime, and detailed status.
  • Current traffic, displayed in bps.
  • Clicking the View Information button highlights the policy:
    • Only shows the interfaces associated with the selected policy in the DMF Interfaces tab.
    • Enable Configuration Details and Operational Details.
  • Clicking on an active policy card deselects the previously selected policy:
    • De-emphasizes the policy and resets card styles and tabs accessibility.
    • Reveals all the interfaces in DMF Interfaces.
    • Interface card highlights in the DMF Interfaces tab can co-exist, leading to a more granular search.

DMF Interfaces

Figure 49. DMF Interfaces
  • Active tab by default
  • Header Row
    • Stat selector: Choose between Utilization, Bit Rate, and Packet Rate to display in the subsequent interface info cards.
    • Sorter selector: Choose between Utilization and interface name to sort the interfaces in ascending or descending order.
    • Filter drop-down:
      • Utilization range filter
      • Switch name selector
      • DMF interface name selector
  • Interface Column
    • Header: Specifies interface category and count, showing X Associated when filters apply and X Total otherwise.
    • Interface Information Card
      • Interface name
      • Stat
        • Utilization
        • Bit Rate
        • Packet Rate
      • Text: Display detailed information about the selected stat of the current interface.

Interaction

  • Selecting one policy card:

    The selected policy card highlights and filters interfaces to only those configured to the policy and hides interfaces not configured in the selected policy.

Figure 50. Policy Card
  • Selecting one interface card:

    The selected interface card highlights and filters policies to only those configured to the interface and hides interfaces not configured in the filtered policies mentioned above.

Figure 51. Single Interface Card
  • Selecting multiple interface cards (any columns):

    The selected interface cards highlight and filter policies to only those configured on the selected interfaces and hide interfaces not configured in the filtered policies mentioned above.

Figure 52. Multiple Interface Cards

Highlighted policy and interface cards can co-exist, leading to a more granular search.

Figure 53. Policy and Interface Cards

Configuration Details

The GUI is similar to Table View > Configuration Details. Please refer to the Configuration Details section.

Operational Details

The GUI is similar to Table View > Operational Details. Please refer to the Operational Details section.

Policy Elements

Each policy includes the following configuration elements:
  • Filter interfaces: identify the ingress ports whose traffic is analyzed by this policy. Choose individual filter interfaces or one or more filter interface groups. A Select All Filter Interfaces option is also available and is intended for small-scale deployments.
  • Delivery interfaces: identify the egress ports that deliver the selected traffic to tools for this policy. Choose individual delivery interfaces or one or more delivery interface groups. Like filter interfaces, a Select All Delivery Interfaces option is available for small deployments.
  • Action: identifies the policy action applied to the inbound traffic. The following actions are available:
    • Forward: forwards matching traffic at filter ports to the delivery ports defined in the policy. Select at least one filter interface and one delivery interface.
    • Drop: drops matched traffic at the filter ports. A policy with a drop action is often combined with another, lower-priority policy to forward all traffic except the dropped traffic to tools. Use Drop to measure the bandwidth of matching traffic without forwarding it to a tool. Select at least one filter interface.
    • Capture: sends the selected traffic to a physical interface on the Controller to be saved in a PCAP file. This option works only on a hardware Controller appliance. Select at least one filter interface. A policy with a capture action can only run for a short period. For continuous packet capture, use the DANZ Monitoring Fabric (DMF) Recorder Node. Refer to the chapter Using the DMF Recorder Node for details.
      Note: The policy is not installed if an action is not selected.
  • Match rules: used to select traffic. The selected traffic is treated according to the action; the most common action is Forward, which forwards matched traffic to the delivery interfaces. If no match rule is specified, or the match rule is Deny All Traffic, the policy is not installed. One policy can specify multiple match rules, differentiating each rule by its rule number.
    Note: The rule numbers do not define the order in which the rules are installed or processed. The numbering only allows a user to list them in order.
  • Managed services (optional): identify additional operations to perform, such as packet slicing, time stamping, packet deduplication, or packet obfuscation, before sending the traffic to the selected delivery interfaces.
  • Status (optional): enables or disables the policy using the active or inactive sub-command from the config-policy submode. By default, a policy is active when initially configured.
  • Priority (optional): unless a user specifies otherwise, all policies have a priority of 100. When filter/ingress ports are shared across policies, a policy with a higher priority gets access to matching traffic first. Traffic not matched by the higher-priority policies is then processed according to the lower-priority policies. Overlap policies are also not created when two policies have different priorities defined.
  • Push VLAN (optional): when the Auto VLAN Mode is set to push-per-policy (i.e., Push Unique VLAN on Policies), every policy configured on DMF gets a unique VLAN ID. Typically, this VLAN ID is in the range 1-4094 and auto-increments by 1. However, to assign a specific VLAN ID to a specific policy, first define a smaller VLAN range using the auto-vlan-range command and then pick a VLAN outside that range to attach to the policy. Attach the VLAN in the CLI using the push-vlan command, or in the GUI by selecting Push VLAN from the Advanced Options drop-down and specifying the VLAN ID.
  • Root switch (optional): when a core switch (or core link) goes down, existing policies using that switch are rerouted through other core switches. When the switch comes back, the policy does not move back, which in some cases causes traffic overload. One way to overcome this problem is to specify a root switch in each policy. The policy is rerouted through other switches when the root switch goes down; when the root switch comes back, DMF reroutes the policy through the root switch again.
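
The elements above map to sub-commands in the config-policy submode. The following sketch combines several of them; the policy and interface names are hypothetical, and exact keywords may vary by DMF release:

controller-1(config)# policy TOOL-FEED-WEB
controller-1(config-policy)# action forward
controller-1(config-policy)# filter-interface TAP-PORT-01
controller-1(config-policy)# delivery-interface TOOL-PORT-01
controller-1(config-policy)# 10 match tcp dst-port 80
controller-1(config-policy)# priority 200
controller-1(config-policy)# push-vlan 4000

In this sketch, rule number 10 selects TCP traffic to port 80, priority 200 gives the policy precedence over default-priority (100) policies sharing the same filter interface, and push-vlan 4000 assumes the auto-VLAN range was first narrowed with auto-vlan-range so that VLAN 4000 falls outside it.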

Policies can include multiple filter and delivery interfaces, and services are optional. Traffic that matches the rules in any policy affiliated with a filter interface forwards to all the delivery interfaces defined in the policy.

Except for a capture-action policy, a policy runs indefinitely once activated. Optionally, schedule the policy by specifying a start time and the period for which it should run, or specify a number of packets received by the tool, after which the policy automatically deactivates.
Note:
  1. Create and configure all interfaces and service definitions before creating a policy that uses them.
  2. Use only existing interfaces and service definitions when creating a policy. When creating a policy with interfaces or service definitions that do not exist, the policy may enter an inconsistent state.
  3. If this happens, delete the policy, create the interfaces and service definitions, and then recreate the policy.

Configuring a Policy

Configure a Policy Using the GUI

DANZ Monitoring Fabric (DMF) introduces a newly designed Create Policy configuration workflow, replacing the former workflow page.

There are two entry points for creating a policy: the Create Policy button continuously displayed in the top-right corner of the DMF Policies page, and the Create Policy button that appears in the central panel of the same page when no configured policies exist.

Figure 54. DMF Policies

Clicking the Create Policy button opens the new Policy Creation configuration page, which supports moving, minimizing, expanding, collapsing, and closing the window using the respective icons in the menu bar.

Figure 55. Create Policy
Figure 56. UI Controls

Move: Click (and hold) any part of the title section of the window or the icon to drag and reposition as required. Moving the window in full-size mode is not possible.

Expand: Click the icon to enlarge the window.

Minimize: Click the icon to minimize the window and the icon to return to the standard view.

Proceed to the following sections to create and manage policies.

Create a New Policy


To create a new Policy, complete the required fields in the Policy Details section and configure settings under the Port Selection tab (optional) and the Match Traffic tab (optional). Please refer to the Policy Details, Port Selection Tab, and Match Traffic Tab sections for more detailed information on configuring settings.

Once configured, click the Create Policy button on the bottom-right corner to save the changes and finish the policy creation.

Figure 57. Create Policy
Policy Details
Figure 58. Policy Details

Enter the primary information for the policy:

  • Policy Name (must be unique)
  • Description
  • Policy Action: Capture, Drop, Forward (default)
    Note: The Destination Tools column is not available when Drop and Capture actions are selected.
  • Push VLAN
  • Priority: By default, set to 100 if no value is specified.
  • Active: By default, set to enabled.
  • Advanced Options: By default, disabled.

When Advanced Options is enabled, the following configuration settings are available:

Figure 59. Advanced Options
  • Scheduling: There are four options:
    • Automatic: The policy runs indefinitely.
    • Now: The policy starts running immediately; use Run Time to determine when the policy should stop.
    • Set Time: Set a specific date and time to start the policy.
      Figure 60. Scheduling
    • Set Delay: Start the policy using relative time options.
      Figure 61. Set Delay
  • Run Time: There are two options:
    • Always: (default).
    • For Duration: Selecting For Duration enables the Time Input to set the time value and the Unit selector to set the time unit. Select the checkbox to use Packet Input and enter the required packet count (1,000 by default).
Figure 62. Run Time
  • PTP Timestamping: Disabled by default.
  • Root Switch: By default, set to a locked state. Click the lock icon to unlock and select a root switch.

Additional Controls

Figure 63. Collapse
Figure 64. Show
  • Collapse and Show: Visually hide or unhide the basic policy configurations to manage the view of the other configuration fields.

Traffic Sources

The Traffic Sources column displays the associated traffic sources in the policy.

Figure 65. Traffic Sources
To add Sources, click on the Add Port(s) button. The page allows adding Filter Interfaces and Groups, vCenters, or CloudVision Portals.
Note: The left column has three source groups. Select the corresponding type of traffic source to view the available selections. After making all desired selections, confirm them using the Add N Sources button.
Figure 66. Add Sources

Interfaces can be searched by the available information in the interface tiles using the search bar. Clicking the icon reveals sorting and filtering options using Display Data, which includes:

  • Sort - By default, DMF sorts the data in descending Bit Rate order. Optionally, sort the data by ascending Bit Rate order or alphabetically.
  • Bit Rate (default), Utilization percentage, or Packet Rate
  • Switch Name
  • Interface Name(s)

 

Figure 67. Traffic Sources Display Data

DMF sorts vCenters and CloudVision Portals alphabetically (A-Z, by default).

Figure 68. Sort Traffic Sources

If a Filter Interface has not been created yet, the Create button offers two selections: Filter Interfaces and Filter Interface Groups.

Figure 69. Filter Interfaces / Filter Interface Groups

Clicking the Create Filter Interface button opens a form to configure a Filter Interface. Enter the required settings to configure the new Filter Interface.

Figure 70. Configure Filter Interface

Alternatively, the left column allows the selection of an existing connected device to pre-populate the Switch Name and Interface Name fields and to configure a Filter Interface based on a connected device. Click the Create and Select button to create the Filter Interface and associate it with the current policy.

Figure 71. Associate Filter Interface

To create multiple Filter Interface(s), click the Create another button to create an interface using the current configuration. This action clears the form to allow the creation of an additional Filter Interface.

Figure 72. Add Multiple Filter Interfaces
Note: The Select (n) interface button associates all created Filter Interfaces to the current policy.

Click the Create Filter Interface Group button to create a group of filter interfaces.

Figure 73. Create Filter Interface Group

Select one or more filter interfaces to create a Filter Interface Group.

Figure 74. Add Filter Interfaces

Click the Create Group button to create the Filter Interface Group and associate the group with the current policy.

Figure 75. Create Group

Expand the group tile to view interfaces within an Interface Group.

Figure 76. Expand Details
Note: Clicking the x icon on the top right of each tile disassociates the Filter Interface from the current policy. Clicking the Undo button restores the association.
Figure 77. Disassociate Filter Interface
CloudVision Portals

The Create Policy window lists CloudVision Portals connected to DMF and includes the CloudVision Portal name, the portal hostname, and the current software version. Select a card to add a CloudVision Port Mirroring Table. The card displays similar information and the default Tunnel Endpoint.

Figure 78. CloudVision Portals

An empty port mirroring table initializes; add rows to the table to configure port mirroring sessions.

Use the following guidelines to configure a port mirroring session:

  • Each row must contain a Device and Source Interface. This interface in the CloudVision production network will mirror traffic to DMF.
  • Each interface must select a Monitor Type: GRE Tunnel or SPAN.
    Note: SPAN requires a physical connection from the CloudVision Portal to DMF. The default value for Tunnel Endpoint is the CloudVision Portal’s Default Tunnel Endpoint.
  • Each device must have the same Tunnel Endpoint and Tunnel Source values across the policies. Each interface on a device must have an identical destination configuration (GRE Tunnel, GRE Tunnel Source, and SPAN Interface) across the policies.
  • The default traffic direction is Bidirectional but configurable to Ingress or Egress.
  • After configuring the Port Mirroring Table, click Add Sources to return to the Main Page of the Create Policy configuration page.
Figure 79. Edit Policy

After configuring Port Mirroring, the card appears in the Traffic Sources section. To edit the Port Mirroring Table, click the X Entries link.

Services

The Services column displays the Services and Managed Services associated with the policy. The Add Service(s) button opens a new page to specify additional services.

Figure 80. Services Add Services

View All Services and View All Managed Services open the DMF Services and Managed Services pages, respectively. The Add Service button opens a configuration panel to specify Service information. If there are Services associated with this policy, they will be listed and available to edit.

Figure 81. View All Services / View All Managed Services

For each Service, specify:

  • Service Type: Managed or Unmanaged.
  • Service: Name of the Service (required).
  • Optional: Whether the Service is optional.
  • Backup Service: Name of the backup Service.
  • Del. Service: If the Managed Service type is selected, whether to use it as a Delivery Service.

Click the Add Another button to populate a new row to add another Service. The Add (n) Services button associates the Services with the policy.

Figure 82. Add Another Service

After adding the services, they appear in the Services column. Click the x icon on a Service tile to disassociate the Service from the policy. While remaining on the page, re-associate the Service, if required, by clicking the Undo button.

Figure 83. Service Added

Destination Tools

The Destination Tools column displays the associated Destination Tool ports to a given policy.

Figure 84. Destination Tools

Use the Add Port(s) button to add more destinations. The configuration page allows adding Delivery Interfaces/Groups or Recorder Node Fabric Interfaces.

Note: The left column contains two groups. Select the corresponding type of Destination Tools to see the available selections. After making the desired selections, confirm using the Add (n) Interfaces button.
Figure 85. Add Interfaces

Interfaces can be searched by the available information in the interface tiles using the search bar. Clicking the icon reveals sorting and filtering options using Display Data, which includes:

  • Sort - By default, DMF sorts the data in descending Bit Rate order. Optionally, sort the data by ascending Bit Rate order or alphabetically.
  • Bit Rate (default), Utilization percentage, or Packet Rate
  • Switch Name
  • Interface Name(s)

Figure 86. Filter Destination Tools

Sort Recorder Node Fabric Interfaces alphabetically (A-Z, by default) and filter by Bit Rate.

Figure 87. Sort Destination Tools

If Destinations (Delivery Interfaces) still need to be created, the Create button provides two selections: create Delivery Interfaces and create Delivery Interface Groups.

Figure 88. Create Delivery Interfaces / Delivery Interface Groups

Clicking the Create Delivery Interface button opens a form to configure a Delivery Interface. Enter the required settings to configure the new Delivery Interface.

Figure 89. Configure Delivery Interface

Alternatively, the left column allows the selection of an existing connected device to pre-populate the Switch Name and Interface Name fields and to configure a Delivery Interface based on a connected device. Click the Create and Select button to create the Delivery Interface and associate it with the current policy.

Figure 90. Associate Delivery Interface

To create multiple Delivery Interfaces, click the Create another button to create an interface using the current configuration. This action then clears the form to allow the creation of an additional Delivery Interface.

Figure 91. Multiple Delivery Interfaces

The Select (n) interface button associates all created Delivery Interfaces to the current policy.

Figure 92. Select Number of Interfaces & Associate

Click the Create Delivery Interface Group button to create a group of delivery interfaces.

Figure 93. Create Delivery Interface Group

Select one or more delivery interfaces to create a Delivery Interface Group.

Figure 94. Multiple Delivery Interfaces

Click the Create Group button to create the Delivery Interface Group and associate the group with the current policy.

Figure 95. Associate Delivery Interface Group

Expand the group tile to view interfaces within an Interface Group.

Figure 96. Expand Details

Stat Picker

Use the Stat: Packet Rate drop-down to view specific data for the associated interfaces.

Figure 97. None

The data options are:

Utilization

Figure 98. Utilization

Bit Rate (default)

Figure 99. Bit Rate

Packet Rate

Figure 100. Packet Rate

Match Traffic and Match Traffic Rules

Match Traffic

Use the Match Traffic tab to configure rules for the current policy.

Figure 101. Match Traffic

There are four options to configure traffic rules.

Figure 102. Configuration Options

Select the Allow All Traffic or Deny All Traffic radio button to quickly configure a rule for all traffic.

Navigate to the Rule Details configuration panel using the Configure A Rule button. Refer to the Custom Rule, Match Rule Shortcut, and Match Rule Group sections for more information.

The Import Rules button opens the import rule configuration dialog and supports importing .txt files using drag and drop or Browse.

Example Text File

1 match ip
2 match tcp
3 match tcp src-port 80
4 match tcp dst-port 25
Figure 103. Import Rules

Click the Preview button to verify the import result.

Figure 104. Preview

While using the Preview Imported Rule table, click the Edit button to open the Edit Rule configuration panel.

Figure 105. Edit Rule

Click the Confirm button when finished, and use the Import x Rules button to import the rules.
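
The four rules in the example file can also be entered directly in the CLI from the config-policy submode. The following is a sketch using the same rule syntax as the import file; the policy name RULES-DEMO is hypothetical:

controller-1(config)# policy RULES-DEMO
controller-1(config-policy)# 1 match ip
controller-1(config-policy)# 2 match tcp
controller-1(config-policy)# 3 match tcp src-port 80
controller-1(config-policy)# 4 match tcp dst-port 25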

Custom Rule

Click the Configure a Rule button to open the Configure A Traffic Rule screen.

Figure 106. Configure a Traffic rule

By default, the configuration method is Custom Rule, with several fields initially disabled; hover over the question mark icon for more information on enabling an input field.

Figure 107. Help Icon

Selecting specific EtherTypes opens an Additional Configurations panel.

Figure 108. Additional Configurations

Click the drop-down icon to display additional configurations (Source, Destination, Offset Match). Hovering over Offset Match allows viewing requirements to enable the Offset Match.

Figure 109. Offset Match
Match Rule Shortcut

To access the Match Rule Shortcut, click the drop-down button and select Match Rule Shortcut.

Figure 110. Match Rule Shortcut

Click the Select Rule Shortcut selector and choose the required shortcut rules (supports multi-selection).

Figure 111. Shortcut Rule List

After selecting the rule shortcut:

  • All selected rules appear as a card in the selector.
  • Delete selected rules using the x icons.
  • Click the Customize Shortcut button to edit a rule shortcut.
Figure 112. Edit Shortcut

After editing, click the Save Edit button to return to the Match Rule Shortcut view.

Figure 113. Save Edits

After configuring the shortcut rules, click the Add (n) Rules button to finish the configuration.

Match Rule Group

To access the Match Rule Group, click the drop-down button and select Match Rule Group.

Figure 114. Match Rule Group

To select a rule group, click the drop-down button under Rule Group. All rule groups appear in the menu. Select one. There is no multi-select available. Repeat the Match Rule Group steps to add more than one rule group.

Figure 115. Rule Group List

After configuring the rule group, click the Add Rule button to finish the configuration.

Figure 116. Add Rule

Rules Table

All configured rules appear in the Rules Table.

Figure 117. Rules Table
  • Import Rules
    Figure 118. Import Rules
    • Similar in function to the Import Rules button on the start page. Refer to Start Page -> Import Rules for more information.
  • Export Select Rules
    Figure 119. Export Select Rules
    • Disabled by default when no rule is selected.
    • Enabled when one or more than one rule is selected.
    • Click to export selected rules information as a .txt file.
  • Delete
    Figure 120. Delete
    • Disabled by default when no rule is selected.
    • Enabled when one or more than one rule is selected.
    • Click to delete the selected rules.
  • Create New Rule and Create Rule Group buttons
    Figure 121. Create New Rule / Create Rule Group
    • The button will appear as Create New Rule when no rule is selected. Click to open the Create New Rule screen.
    • When one or more rules are selected, the button changes to Create Rule Group. Click to open the Create Rule Group screen.
      Figure 122. Create Rule Group
The Rule Group Name is required. Click Create Group to confirm the rule group creation.
  • Table Actions
    Figure 123. Edit / Delete
    • Click the Edit button to open the rule edit view.
    • Click the Delete button to delete the rule.
  • Table Search
    Figure 124. Table Search
    • The Rules Table supports search functionality. Click the magnifying glass icon to activate the search input fields and search the results by the context of each column.
  • Checkbox
    Figure 125. Checkbox
    • Check the box to select a rule and use the function buttons described above.

 

  • Expandable Group Rules
    • Group Rules in the Rule Table display as the group's name with an expand button.
      Figure 126. Expand
    • Click the expand button to see the rules included in the group.
      Figure 127. Expanded Column

Configure a Policy Using the CLI

Before configuring a policy, define the filter interfaces for use in the policy.

To configure a policy, log in to the DANZ Monitoring Fabric (DMF) console or SSH to the IP address assigned and perform the following steps:

  1. From config mode, enter the policy command to name the policy and enter the config-policy submode, as in the following example:
    controller-1(config)# policy POLICY1
    controller-1(config-policy)#

    This example creates the policy POLICY1 and enters the config-policy submode.

  2. Configure one or more match rules to identify the aggregated traffic from the filter interfaces assigned to the policy, as in the following example.
    controller-1(config-policy)# 10 match full ether-type ip dst-ip 10.0.0.50 255.255.255.255
    This matching rule (10) selects IP traffic with a destination address 10.0.0.50.
  3. Assign one or more filter interfaces, which are monitoring fabric edge ports connected to production network TAP or SPAN ports and defined using the interface command from the config-switch-if submode.
    controller-1(config-policy)# filter-interface TAP-PORT-1
    Note: Define the filter interfaces used before configuring the policy.
    To include all monitoring fabric interfaces assigned the filter role, use the all keyword, as in the following example:
    controller-1(config-policy)# filter-interface all
  4. Assign one or more delivery interfaces, which are monitoring fabric edge ports connected to destination tools and defined using the interface command from the config-switch-if submode.
    controller-1(config-policy)# delivery-interface TOOL-PORT-1
    Define the delivery interfaces used in the policy before configuring the policy. To include all monitoring fabric interfaces assigned the delivery role, use the all keyword, as in the following example:
    controller-1(config-policy)# delivery-interface all
  5. Define the action to take on matching traffic, as in the following example:
    controller-1(config-policy)# action forward
    • The forward action activates the policy so matching traffic immediately starts being forwarded to the delivery ports identified in the policy. The other actions are capture and drop.
    • A policy is active when the configuration of the policy is complete, and a valid path exists through the network from a minimum of one of the filter ports to at least one of the delivery ports.
    • When inserting a service in the policy, the policy can only become active and begin forwarding when at least one delivery port is reachable from all the post-service ports defined within the service.
    To verify the operational state of the policy, enter the show policy command.
    controller-1# show policy GENERATE-IPFIX-NETWORK-TAP-1
    Policy Name : GENERATE-IPFIX-NETWORK-TAP-1
    Config Status : active - forward
    Runtime Status : installed
    Detailed Status : installed - installed to forward
    Priority : 100
    Overlap Priority : 0
    # of switches with filter interfaces : 1
    # of switches with delivery interfaces : 1
    # of switches with service interfaces : 0
    # of filter interfaces : 1
    # of delivery interfaces : 1
    # of core interfaces : 0
    # of services : 0
    # of pre service interfaces : 0
    # of post service interfaces : 0
    Push VLAN : 3
    Post Match Filter Traffic : -
    Total Delivery Rate : -
    Total Pre Service Rate : -
    Total Post Service Rate : -
    Overlapping Policies : none
    Component Policies : none
    ~ Match Rules ~
    # Rule
    -|-----------|
    1 1 match any
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
    -|-------------|---------------|----------|-----|---|---------|-----------|--------|--------|------------------------------|
    1 TAP-TRAFFIC-2 FILTER-SWITCH-1 ethernet16 up rx 182876967 69995305364 0 - 2022-10-31 23:13:10.177000 PDT
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
    -|-------------|---------------|----------|-----|---|---------|-----------|--------|--------|------------------------------|
    1 TAP-TRAFFIC-1 FILTER-SWITCH-1 ethernet15 up tx 182876967 69995305364 0 - 2022-10-31 23:13:10.177000 PDT
    ~ Service Interface(s) ~
    None.
    ~ Core Interface(s) ~
    None.
    ~ Failed Path(s) ~
    None.
    controller-1#
Note: If two policies have the same filter and delivery interfaces and the same priority with similar match conditions, then incorrect statistics can result for one or both policies. To alleviate this issue, either increase the priority or change the match conditions in one of the policies.
The Detailed Status field in the show policy command output shows detailed information about the policy status. If a policy fails for any reason, the detailed status shows why it failed. One cause of policy failure is the TCAM reaching its total capacity. When this happens, the detailed status shows a message like Table ing_flow2 is full <switch_DPID>.
  • ing_flow1 - the TCAM table used for programming analytics tracking, such as DNS, DHCP, ICMP, TCP control packets, and ARP.
  • ing_flow2 - the TCAM table used for programming data forwarding.
  • To delete an existing policy, use the no policy command and identify the policy to delete, as in the following example:
    controller-1(config-policy)# no policy policy-name-1
    Warning: submode exited due to deleted object
  • When deleting a policy, DMF deletes all traffic rules associated with the policy.
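
Taken together, the numbered steps above form a session like the following sketch, reusing the policy and interface names from the earlier examples:

controller-1(config)# policy POLICY1
controller-1(config-policy)# 10 match full ether-type ip dst-ip 10.0.0.50 255.255.255.255
controller-1(config-policy)# filter-interface TAP-PORT-1
controller-1(config-policy)# delivery-interface TOOL-PORT-1
controller-1(config-policy)# action forward

After entering these commands, verify the result with the show policy command.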

Define Out-of-band Match Rules Using the CLI

A policy can contain multiple match rules, each assigned a rule number. However, the rule number does not specify a priority or the sequence in applying the match rule to traffic entering the filter ports included in a policy. Instead, if the traffic matches any match rules, all actions specified in the policy are applied to all matching traffic.

The following example adds two match rules to dmf-policy-1.
controller-1(config)# policy dmf-policy-1
controller-1(config-policy)# 10 match full ether-type ip dst-ip 10.0.0.50 255.255.255.255
controller-1(config-policy)# 20 match udp src-ip 10.0.1.1 255.255.255.0
controller-1(config-policy)# filter-interface filname2
controller-1(config-policy)# delivery-interface delname3
controller-1(config-policy)# action forward
Note: When changing an existing installed policy by adding or removing match rules, DANZ Monitoring Fabric (DMF) calculates the change in policy flows and only sends the difference to the switches in the path for that policy. The unmodified flows for that policy are not affected.

When more than one action applies to the same packet, DMF makes copies of the matched packet. For details, refer to the chapter Advanced Policy Configuration.

Stop, Start, and Schedule a Policy Using the CLI

Enter the active or inactive command from the config-policy submode to enable or disable a policy.

To stop an action that is currently active, enter the stop command from the config-policy submode for the policy, as in the following example:
controller-1(config)# policy policy1
controller-1(config-policy)# stop

By default, if the policy action is forward or drop, the policy is active unless it is manually stopped or disabled.

To start a stopped or inactive policy immediately, enter the start now command from the config-policy submode for the policy, as in the following example:
controller-1(config)# policy policy1
controller-1(config-policy)# start now

For a policy with the forward action, the start now command causes the policy to run indefinitely. However, policies with the capture action run the capture for 1 minute unless otherwise specified, after which the policy becomes inactive. This behavior prevents a capture from running indefinitely and consuming the appliance storage capacity.

Use the start command with other options to schedule a stopped or inactive policy. The full syntax for this command is as follows:

start { now [ duration duration ] [ delivery-count delivery-packet-count ] | automatic | on-date-time start-time [ duration duration ] | seconds-from-now start-time [ duration duration ] [ delivery-count delivery-packet-count ] }

The following summarizes the usage of each keyword:
  • now: start the action immediately.
  • delivery-count: runs until the specified number of packets are delivered to all delivery interfaces.
  • seconds: start the action after waiting for the specified number of seconds. For example, 300+ starts the action in 5 minutes.
  • date-time: starts the action on the specified date and time. Use the format %Y-%m-%dT%H:%M:%S.
  • duration: DANZ Monitoring Fabric (DMF) assigns 60 seconds by default if no duration is specified. A value of 0 causes the action to run until it is stopped manually. When using the delivery-count keyword with the capture action, the maximum duration is 900 seconds.
For example, to start a policy with the forward action immediately and run for five minutes, enter the following command:
controller-1(config-policy)# start now duration 300
The following example starts the action immediately and stops after matching 100 packets:
controller-1(config-policy)# start now delivery-count 100

The following example starts the action after waiting 300 seconds:

controller-1(config-policy)# start 300+
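
No example is given above for the date-time form. The following sketch assumes the start time is supplied directly in the %Y-%m-%dT%H:%M:%S format described earlier; the date and duration values are illustrative:

controller-1(config-policy)# start 2024-03-01T08:00:00 duration 600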

Clear a Policy Using the CLI

To remove a specific DANZ Monitoring Fabric (DMF) policy, use the no keyword before the policy command, as in the following example:
controller-1(config)# no policy sample_policy

This command removes the policy sample_policy.

To clear all policies at once, enter the following command:
controller-1(config)# clear-all-configured-policy

View Policies Using the CLI

To display the policies currently configured in the DANZ Monitoring Fabric (DMF) fabric, enter the show policy command, as in the following example:

This output provides the following information about each policy.
  • #: a numeric identifier assigned to the policy.
  • Policy Name: name of the policy.
  • Action: Forward, Capture, or Drop.
  • Runtime Status: a policy is active only when the policy configuration is complete and a valid path exists through the network from at least one of the filter ports to at least one of the delivery ports (passing through the service ports, if specified). When inserting a service in the policy, the policy can only become active and forward traffic when a delivery port is reachable from all the post-service ports of the service.
  • Type: configured or dynamic. Refer to the Configuring Overlapping Policies section for details about dynamic policies created automatically to support overlapping policies.
  • Priority: determines which policy is applied first.
  • Overlap Priority: the priority assigned to the dynamic policy applied when policies overlap.
  • Push VLAN: a feature that rewrites the outer VLAN tag for a matching packet.
  • Filter BW: bandwidth used.
  • Delivery BW: bandwidth used.

The following is the full command syntax for the show policy command:

show policy [ name [ filter-interfaces | delivery-interfaces | services | core | optimized-match | failed-paths | drops | match-rules ]]

Use the event history to determine the last time when policy flows were installed or removed. A value of dynamic for Type indicates the policy was dynamically created for overlapping policies.

Rename a Policy Using the CLI

Policy Renaming Procedure
Note: A DANZ Monitoring Fabric (DMF) policy must exist to use the renaming feature.

Use the following procedure to rename an existing policy.

  1. Use the CLI command policy existing-policy-name to enter the submode of an existing policy and then enter the show this command.
    dmf-controller-1(config)# policy existing-policy-name
    dmf-controller-1(config-policy)# show this
    ! policy
    policy existing-policy-name
  2. Enter the rename command with the new policy name, as shown in the following example.
    dmf-controller-1(config-policy)# rename new-policy-name
    Note: Possible traffic loss may occur when renaming a policy.
  3. Verify the policy name change using the show this command.
    dmf-controller-1(config-policy)# show this
    ! policy
    policy new-policy-name
    dmf-controller-1(config-policy)#
Note: A user must have permission to update the policy. The new policy name must follow the requirements for a policy name.

Using the Packet Capture Action in a Policy

Capture packets into a PCAP file for later processing or analysis. DANZ Monitoring Fabric (DMF) stores the captured packets on the DMF Controller hardware appliance. This feature provides a quick look at a small amount of traffic. For continuous packet capture and storage, use the DMF Recorder Node, described in the chapter Using the DMF Recorder Node.
Note: Storing PCAP files is supported only on the DMF hardware appliance; this feature is not available when running the Controller in a virtual machine. The DMF hardware appliance normally provides 200 GB of storage capacity but is optionally available with 1 TB of storage capacity.

To enable this feature, connect one of the DMF Controller hardware interfaces to a fabric switch interface defined as a DMF delivery interface.

Figure 128. DMF Controller Hardware Appliance
Table 1.
1 - 1G Management Port
2 - 10G Management Port
Figure 129. Capturing Packets on the DMF Appliance

To capture packets, define a policy with filter ports and match rules to select the interesting traffic. Specify the capture action in the policy, then schedule the policy for a duration or packet count. In the illustrated example, a service exists in the policy to modify the packets before capture, but this is optional.

By default, when the policy action is capture, the policy is only active after scheduling the policy. Packet captures are always saved on the master (active) Controller. In case of HA failover, previous packet captures remain on the Controller where they were initially saved.

By default, DMF automatically removes PCAP files after seven days. If preferred, change the default value using the following CLI command:
controller-1(config)# packet-capture retention-days <tab-key>
<retention-days> Configure packet capture file retention period in days. Default is 7 days
controller-1(config)#
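
For example, to extend the retention period, supply the number of days directly, as in the following sketch; the value 14 is illustrative:

controller-1(config)# packet-capture retention-days 14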

Define a Policy with a Packet Capture Action Using the CLI

Use the packet-capture retention-days command to change the number of days to retain PCAP files. To view the current setting, use the show packet-capture retention-days command.

To remove PCAP files immediately, use the delete packet-capture files command. Delete the files affiliated with a specific policy, as shown in the following example:
controller-1(config-policy)# delete packet-capture files policy capture file 2022-02-24-07-31-25-34d9a85a.pcapng
The following command assigns the capture action to the current policy and schedules the packet capture to start immediately and run for 60 seconds.
controller-1(config-policy)# action capture
controller-1(config-policy)# start now duration 60

For a policy with the forward action, the start now command causes the policy to run indefinitely. However, policies with the capture action run the capture for 1 minute unless otherwise specified, after which the policy becomes inactive. This behavior prevents a capture from running indefinitely and consuming the appliance storage capacity.

The following command starts the capture immediately and runs until it captures 1000 packets:
controller-1(config-policy)# start now delivery-count 1000
Once the packet capture is complete, the PCAP file can be downloaded via HTTP using the URL displayed when entering the show packet-capture files command, as shown in the following example.
controller-1(config-policy)# show packet-capture files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ All Packet Capture Files ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Policy Name File Name File Size Last Modified URL
-|-----------|----------------------------------|---------|------------------------------|----------------------------------------------------------------|
1 capture 2022-11-01-03-03-19-c106e6c.pcapng 258MB 2022-11-01 03:04:17.227000 PDT https://10.9.33.2/pcap/capture/2022-11-01-03-03-19-c106e6c.pcapng
controller-1(config-policy)#
To view the storage used and remaining for PCAP files, enter the show packet-capture disk-capacity and show packet-capture disk-usage commands, as in the following example:
controller-1 > show packet-capture disk-capacity
Disk capacity : 196GB
controller-1> show packet-capture disk-usage
Disk usage : 258MB
controller-1>
To view the number of days PCAP files are retained before deletion, use the show packet-capture retention-days command as in the following example:
controller-1> show packet-capture retention-days
To view the history of packet captures, enter the following command:
controller-1(config-policy)# show policy capture history
# Time Event Detail PCAP File
-|------------------------------|-------------------------------|-----------------|-------------------------------------------------|
1 2022-11-01 03:03:19.382000 PDT installation complete capturing packets /pcap/capture/2022-11-01-03-03-19-c106e6c.pcapng
2 2022-11-01 03:04:16.895000 PDT Configuration updated by admin. capturing packets inactive - outside configured runtime/duration, 
scheduled to be started in 7sec if set active
3 2022-11-01 03:04:17.266000 PDT policy removed inactive - outside configured runtime/duration, 
scheduled to be started in 6sec if set active
controller-1(config-policy)#

Managing DMF Switches and Interfaces

This chapter describes the basic configuration required to deploy and manage DANZ Monitoring Fabric (DMF) switches and interfaces.

 

Overriding the Default Configuration for a Switch

By default, each switch inherits its configuration from the DANZ Monitoring Fabric (DMF) Controller. Use the following pages of the Configure Switch dialog to override the following configuration options for a specific switch.
  • Info
  • Clock
  • SNMP
  • SNMP traps
  • Logging
  • TACACS
  • sFlow®*
  • LAG enhanced hash

CLI Configuration

To use the CLI to manage switch configuration, enter the following commands to enter the config-switch submode.
controller-1(config)# switch <switch-name>
Replace the switch-name with the alias previously assigned to each switch during installation, as in the following example.
controller-1(config)# switch DMF-SWITCH-1
controller-1(config-switch)#

From this submode, configure the specific switch and override the default configuration pushed from the DANZ Monitoring Fabric (DMF) Controller to the switch.

The DANZ Monitoring Fabric 8.6 Deployment Guide provides detailed instructions on overriding the switch's default configuration.

DMF Interfaces

To monitor traffic, assign a role to each of the DANZ Monitoring Fabric (DMF) interfaces, which can be of the following four types:
  • Filter interfaces: ports where traffic enters the DMF. Use filter interfaces to TAP or SPAN ports from production networks.
  • Delivery interfaces: ports where traffic leaves the DMF. Use delivery interfaces to connect to troubleshooting, monitoring, and compliance tools. These include Network Performance Monitoring (NPM), Application Performance Monitoring (APM), data recorders, security (DDoS, Advanced Threat Protection, Intrusion Detection, etc.), and SLA measurement tools.
  • Filter and delivery interfaces: ports with both incoming and outgoing traffic. When placing the port in loopback mode, use a filter and delivery interface to send outgoing traffic back into the switch for further processing. To reduce cost, use a filter and delivery interface when transmit and receive cables are connected to two separate devices.
  • Service interfaces: interfaces connected to third-party services or network packet brokers, including any interface that sends or receives traffic to or from an NPB.

In addition, interfaces connected to managed service nodes and DANZ recorder nodes can be referenced in the configuration directly without assigning a role explicitly. Also, Inter-Switch Links (ISLs), which interconnect DANZ monitoring switches, are automatically detected and referred to as core interfaces.

Using the GUI to Configure a DMF Filter or Delivery Interface

To use the DANZ Monitoring Fabric (DMF) GUI to configure a fabric interface as a filter or delivery interface, perform the following steps:
  1. Select Monitoring > Interfaces from the main menu to display the DMF interfaces.
    Figure 1. DMF Interfaces
  2. Click the provision control (+) in the Interfaces table to configure a new interface.
    Figure 2. Create Interface
  3. Select the Edit option to change the configuration of an already configured interface.
  4. Select the switch and interface from the selection lists and click Next.
    The system displays the second Configuration page.
  5. Assign a name, IP address, and subnet mask to the interface.
  6. Select a radio button to assign a role to the interface:
    • Filter
    • Delivery
    • Filter and Delivery
    • Service
    Note:
    • The options available are updated based on the selection.

      For example, when selecting Filter, the system displays the following dialog box.

      Figure 3. Create Interface: Filter
    • Analytics is enabled by default. To disable Analytics for the interface, move the slider to Disabled. For information about Analytics, refer to the Analytics Node User Guide.
    • Optionally, enable the Rewrite VLAN option for a filter or a filter-and-delivery interface by entering the VLAN ID in the Rewrite VLAN field.
  7. Complete the configuration for the specific interface role.
  8. Click Save to save the configuration.

Using the CLI to Configure a DANZ Filter or Delivery Interface

To assign a filter or delivery role to an interface, perform the following steps:
  1. From the config mode, enter the switch command, identifying the switch with the interface to configure.
    controller-1(config)# switch DMF-FILTER-SWITCH-1
    controller-1(config-switch)#
    Note: Identify the switch using the alias if configured. The CLI changes to the config-switch submode to configure the specified switch.
  2. From the config-switch mode, enter the interface command, as in the following example:
    controller-1(config-switch)# interface ethernet1
    controller-1(config-switch-if)#
    Note: To view a list of the available interfaces, enter the show switch <switch-name> interface command or press the Tab key; the command completion feature displays a concise list of permitted values. After identifying the interface, the CLI changes to the config-switch-if submode to configure the specified interface.
  3. From the config-switch-if submode, enter the role command to identify the role for the interface. The syntax for defining an interface role (delivery, filter, filter-and-delivery, or service) is as follows:
    [no] role delivery interface-name <name> [strip-customer-vlan] [ip-address <ip-address>] [nexthop-ip <ip-address> <subnet>]
    [no] role filter interface-name <name> [ip-address <ip-address>] [rewrite vlan <vlan-id (1-4094)>] [no-analytics]
    [no] role both-filter-and-delivery interface-name <name> [rewrite vlan <vlan-id (1-4094)>] [no-analytics]
    [no] role service interface-name <name>
    The interface-name command assigns an alias to the current interface, which typically would indicate the role assigned, as in the following example:
    controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1
    Note: An interface can have only one role, and the configured interface name must be unique within the DANZ Monitoring Fabric.
    The following examples show the configuration for filter, delivery, and service interfaces:
    • Filter Interfaces
      controller-1 (config)# switch DMF-FILTER-SWITCH-1
      controller-1(config-switch)# interface ethernet1
      controller-1(config-switch-if)# role filter interface-name TAP-PORT-1
      controller-1(config-switch-if)# interface ethernet2
      controller-1(config-switch-if)# role filter interface-name TAP-PORT-2
    • Delivery Interfaces
      controller-1(config-switch-if)# switch DMF-DELIVERY-SWITCH-1
      controller-1(config-switch-if)# interface ethernet1
      controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1
      controller-1(config-switch-if)# interface ethernet2
      controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-2
    • Filter and Delivery Interfaces
      controller-1(config-switch-if)# switch DMF-CORE-SWITCH-1
      controller-1(config-switch-if)# interface ethernet1
      controller-1(config-switch-if)# role both-filter-and-delivery interface-name loopback-port-1
      controller-1(config-switch-if)# interface ethernet2
      controller-1(config-switch-if)# role both-filter-and-delivery interface-name loopback-port-2
    • Service Interfaces
      controller-1(config-switch-if)# switch DMF-CORE-SWITCH-1
      controller-1(config-switch-if)# interface ethernet1
      controller-1(config-switch-if)# role service interface-name PRE-SERVICE-PORT-1
      controller-1(config-switch-if)# interface ethernet2
      controller-1(config-switch-if)# role service interface-name POST-SERVICE-PORT-1
    Note:
    1. An interface can have only one role, and the configured interface name must be unique within the DANZ Monitoring Fabric.
    2. A delivery interface can show drops in a many-to-one scenario, i.e., when a policy directs multiple filter interfaces to a single delivery interface. These drops occur because micro-bursts form at the egress port. For example, consider three 10G ingress ports and one 25G egress port. Even if the total offered load is only 25Gbps, each ingress port still operates at its native 10Gbps line rate inside the switch ASIC (a 5Gbps stream still arrives as 10Gbps bursts on the wire, just with larger inter-frame gaps). If all three ingress ports receive a packet at the same time, the instantaneous load is 30Gbps against a 25Gbps egress port, causing momentary over-subscription (a micro-burst). Because the egress port can only drain packets at 25Gbps, some packets are not dequeued promptly and accumulate in the egress TX queue. If this pattern continues, the queue eventually fills and drops packets. Expect this behavior with many-to-one forwarding; reconfiguring to a single 25G ingress port feeding the 25G egress port eliminates the TX drops.
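The arithmetic behind the many-to-one note above can be checked with a short sketch (illustrative Python; the port counts and speeds are the example values from the note):

```python
# Illustrative check of the many-to-one oversubscription example above:
# three 10G filter (ingress) ports feeding one 25G delivery (egress) port.
# Each ingress port bursts at its native line rate, so the worst-case
# instantaneous load on the egress port is the sum of ingress line rates.

ingress_gbps = [10, 10, 10]   # three 10G filter interfaces
egress_gbps = 25              # one 25G delivery interface

worst_case = sum(ingress_gbps)             # 30 Gbps of simultaneous bursts
oversubscribed = worst_case > egress_gbps  # True -> expect TX-queue drops

print(f"worst-case ingress burst: {worst_case} Gbps")
print(f"oversubscription ratio: {worst_case / egress_gbps:.2f}:1")

# A 1:1 case (one 25G ingress to one 25G egress) is not oversubscribed,
# which matches the note's observation that the drops disappear.
assert not (sum([25]) > egress_gbps)
```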

Using the CLI to Identify a Filter Interface using Destination MAC Rewrite

The Destination MAC (D.MAC) Rewrite feature provides an option to identify the Filter interface by overriding the destination MAC address of the packet received on the filter interface. Use this feature for auto-assigned and user-configured VLANs in push-per-filter and push-per-policy modes.
Note: The D.MAC Rewrite feature VLAN preservation applies to switches running SWL OS and does not apply to 7280R/7280R2 switches running EOS.

Global Configuration

Configure this function at the filter interface level and perform the following steps using the CLI.
  1. Select a filter switch and enter the config mode.
    (config)# switch filter1
  2. Select an interface from the switch acting as the filter interface.
    (config-switch)# interface ethernet5
  3. Create a filter interface with a name and provide the MAC address to override.
    (config-switch-if)# role filter interface-name f1 rewrite dst-mac 00:00:00:00:00:03

CLI Show Commands

The following show command displays the ingress flow for the filter switch.

In the Entry value column, the filter switch entries contain the destination MAC TLV EthDst(00:00:00:00:00:03).

(config-policy)# show switch filter1 table ingress-flow-2
# Ingress-flow-2 Device name Entry key                               Entry value
-|--------------|-----------|---------------------------------------|-----------------------------------|
1 0              filter1     Priority(6400), Port(5), EthType(34525) Name(p1), Data([0, 0, 0, 61]), PushVlanOnIngress(flags=[]), VlanVid(0x1), Port(1), EthDst(00:00:00:00:00:03)
2 1              filter1     Priority(6400), Port(5)                 Name(p1), Data([0, 0, 0, 62]), PushVlanOnIngress(flags=[]), VlanVid(0x1), Port(1), EthDst(00:00:00:00:00:03)
3 2              filter1     Priority(36000), EthType(35020)         Name(__System_LLDP_Flow_), Data([0, 0, 0, 56]), Port(controller), QueueId(0)

On the core and delivery switches, the Entry value column does not contain the destination MAC TLV, as shown in the following examples.

(config-policy)# show switch core1 table ingress-flow-2
# Ingress-flow-2 Device name Entry key                                             Entry value
-|--------------|-----------|-----------------------------------------------------|----------------------------|
1 0              core1       Priority(6400), Port(1), EthType(34525), VlanVid(0x1) Name(p1), Data([0, 0, 0, 60]), Port(2)
2 1              core1       Priority(6400), Port(1), VlanVid(0x1)                 Name(p1), Data([0, 0, 0, 59]), Port(2)
3 2              core1       Priority(36000), EthType(35020)                       Name(__System_LLDP_Flow_), Data([0, 0, 0, 57]), Port(controller), QueueId(0)
(config-policy)# show switch delivery1 table ingress-flow-2
# Ingress-flow-2 Device name Entry key                                             Entry value
-|--------------|-----------|-----------------------------------------------------|----------------------------|
1 0              delivery1   Priority(6400), Port(1), EthType(34525), VlanVid(0x1) Name(p1), Data([0, 0, 0, 64]), Port(6)
2 1              delivery1   Priority(6400), Port(1), VlanVid(0x1)                 Name(p1), Data([0, 0, 0, 63]), Port(6)
3 2              delivery1   Priority(36000), EthType(35020)                       Name(__System_LLDP_Flow_), Data([0, 0, 0, 58]), Port(controller), QueueId(0)

Troubleshooting

To troubleshoot a case where the configured destination MAC address is not correctly applied to the filter interface, check the ingress-flow-2 tables shown above: the filter switch entries should contain the destination MAC rewrite TLV, while no such TLV appears on the core or delivery switches.
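This check can be scripted against saved show-command output. The following is a hypothetical helper, not a DMF tool; it simply scans ingress-flow-2 text for the EthDst TLV shown in the examples above.

```python
# Hypothetical helper for the troubleshooting step above: given the text
# of `show switch <name> table ingress-flow-2`, report whether any entry
# carries a destination-MAC rewrite TLV (EthDst(...)). On a correctly
# configured fabric this is True for the filter switch and False for the
# core and delivery switches.
import re

def has_dst_mac_rewrite(flow_table_text: str) -> bool:
    return re.search(r"EthDst\(([0-9a-fA-F:]+)\)", flow_table_text) is not None

# Sample entry-value strings taken from the show-command examples above.
filter_output = ("Name(p1), PushVlanOnIngress(flags=[]), VlanVid(0x1), "
                 "Port(1), EthDst(00:00:00:00:00:03)")
core_output = "Name(p1), Data([0, 0, 0, 60]), Port(2)"

print(has_dst_mac_rewrite(filter_output))  # True
print(has_dst_mac_rewrite(core_output))    # False
```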

As an alternative, drop into the bash of the filter switch to check the flow and destination MAC rewrite.

Use the following commands for the ZTN CLI of the filter switch.

(config)# connect switch filter1
(ztn-config) debug admin
filter1> enable
filter1# debug bash

The following command prints the flow table of the filter switch.

root@filter1:~# ofad-ctl gt ING_FLOW2
Figure 4. Filter Switch Flow Table
The following command shows the policy flow from the filter switch to the delivery switch. The filter switch will have the assigned destination MAC in the match-field.
(config)# show policy-flow
# Policy Name Switch                              Pkts Bytes Pri  T Match                    Instructions
-|-----------|-----------------------------------|----|-----|----|-|------------------------|------------------|
1 p1          core1 (00:00:52:54:00:15:94:88)     0    0     6400 1 eth-type ipv6,vlan-vid 1 apply: name=p1 output: max-length=65535, port=2
2 p1          core1 (00:00:52:54:00:15:94:88)     0    0     6400 1 vlan-vid 1               apply: name=p1 output: max-length=65535, port=2
3 p1          delivery1 (00:00:52:54:00:00:11:d2) 0    0     6400 1 vlan-vid 1               apply: name=p1 output: max-length=65535, port=6
4 p1          delivery1 (00:00:52:54:00:00:11:d2) 0    0     6400 1 eth-type ipv6,vlan-vid 1 apply: name=p1 output: max-length=65535, port=6
5 p1          filter1 (00:00:52:54:00:d5:2c:05)   0    0     6400 1                          apply: name=p1 push-vlan: ethertype=802.1Q (33024), set-field: match-field/type=vlan-vid, match-field/vlan-tag=1, output: max-length=65535, port=1, set-field: match-field/eth-address=00:00:00:00:00:03 (XEROX), match-field/type=eth-dst
6 p1          filter1 (00:00:52:54:00:d5:2c:05)   0    0     6400 1 eth-type ipv6            apply: name=p1 push-vlan: ethertype=802.1Q (33024), set-field: match-field/type=vlan-vid, match-field/vlan-tag=1, output: max-length=65535, port=1, set-field: match-field/eth-address=00:00:00:00:00:03 (XEROX), match-field/type=eth-dst

Considerations

  1. The destination MAC rewrite cannot be used on the filter interface where timestamping is enabled.
  2. The destination MAC rewrite will not work when the filter interface is configured as a receive-only tunnel interface.

Using the GUI to Identify a Filter Interface using Destination MAC Rewrite

In the UI, configure the Rewrite Dest. MAC Address for a Filter Interface using one of the two workflows detailed below. The first workflow uses the Monitoring > Interfaces UI, while the second uses the Fabric > Interfaces UI (see Workflow Two below).

Workflow One

From the Monitoring > Interfaces page (or the Monitoring > Interfaces > Filter Interfaces page), proceed with the following workflow.

Create Interface

  1. Click the table action icon + button to create a filter interface.
  2. After selecting the switch interface in the interface tab, use the Configure tab to assign roles.
  3. Select the Filter radio button for the interface and use the Rewrite Dest.MAC Address input to configure the MAC address to override.
    Figure 5. Create Interface


  4. Click Save to continue.
Edit Interface
  1. Select the row menu of the filter interface to configure or edit, and select Edit.
    Figure 6. DANZ Monitoring Fabric (DMF) Interfaces - Edit
Workflow Two

When using the Fabric > Interfaces page, use the following workflow.

  1. Select the row menu of the switch interface associated with the filter interface to configure, and select Configure.
    Figure 7. Configure Interface
  2. In the DMF tab, enter the MAC address to override in the Rewrite Dest. MAC Address field.
    Figure 8. Edit Interface DMF Rewrite Dest. MAC Address
  3. Click Save to continue.

Forward Slashes in Interface Names

DANZ Monitoring Fabric (DMF) supports using forward slashes (/) in interface names to aid in managing interfaces in the DMF fabric. For example, when:

  • Defining the SPAN device name and port numbers, which generally contain a forward slash (eth2/1/1), for easy port identification.
  • Using separate SPAN sessions for Tx and Rx traffic, when there are multiple links from a device to a filter switch.
The following is the comprehensive list of DMF interfaces supporting the use of a forward slash:
  • filter interface
  • unmanaged service interface
  • filter interface group
  • recorder node interface
  • delivery interface
  • MLAG interface
  • delivery interface group
  • LAG interface
  • filter-and-delivery interface
  • GRE tunnel interface
  • PTP interface
  • VXLAN tunnel interface
  • managed service interface
Note: An interface name cannot start with a forward slash. However, multiple forward slashes are allowed while adhering to the maximum allowed length limitation.
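The naming rules in the note can be sketched as a simple validator. This is illustrative Python, not DMF code, and the maximum length is not stated here, so MAX_NAME_LEN is an assumed placeholder.

```python
# Sketch of the interface-name rules described above: forward slashes are
# allowed anywhere except the first character, and the name must respect
# a maximum length. MAX_NAME_LEN is an assumed placeholder value, not the
# documented DMF limit.
MAX_NAME_LEN = 255  # assumption for illustration only

def is_valid_interface_name(name: str) -> bool:
    if not name or len(name) > MAX_NAME_LEN:
        return False
    if name.startswith("/"):  # a name cannot start with a forward slash
        return False
    return True

print(is_valid_interface_name("eth2/1/1"))  # True  - slashes inside are fine
print(is_valid_interface_name("a/b//c"))    # True  - multiple slashes allowed
print(is_valid_interface_name("/leading"))  # False - leading slash rejected
```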

Configuration

The configuration of DMF filter interfaces remains unchanged. This feature relaxes the existing naming convention by allowing a forward slash to be a part of the name.

The following are several examples:

For switch interfaces, with any of the roles (both-filter-and-delivery, delivery, filter, ptp, service):
dmf-controller-1(config-switch-if)# role <role> interface-name a/b/c
For filter and delivery interface groups:
dmf-controller-1(config)# filter-interface-group a/b/c 
dmf-controller-1(config)# delivery-interface-group a/b/c
Adding interfaces or interface groups to a policy:
dmf-controller-1(config-policy)# filter-interface f1/a/b 
dmf-controller-1(config-policy)# filter-interface-group f/a/b 
dmf-controller-1(config-policy)# delivery-interface d1/a/b 
dmf-controller-1(config-policy)# delivery-interface-group d/a/b
Recorder Node interface:
dmf-controller-1(config)# recorder-fabric interface a/b/c
For a managed service:
dmf-controller-1(config-managed-srv-flow-diff)# l3-delivery-interface a/b/c
MLAG interface:
dmf-controller-1(config-mlag-domain)# mlag-interface a/b/c 
dmf-controller-1(config-mlag-domain-if)# role delivery interface-name a/b/c
LAG interface:
dmf-controller-1(config-switch)# lag-interface a/b/c
GRE tunnel interface:
dmf-controller-1(config-switch)# gre-tunnel-interface a/b/c
VXLAN tunnel interface:
dmf-controller-1(config-switch)# vxlan-tunnel-interface a/b/c

Show Commands

There are no new show commands. The existing show running-config and show this commands for the configurations mentioned earlier should display the interface names without any issue.

Using Interface Groups

Create an interface group consisting of one or more filter or delivery interfaces. It is often easier to refer to an interface group when creating a policy than to identify every interface to which the policy applies explicitly.

Use an address group in multiple policies, referring to the IP address group by name in match rules. If no subnet mask is provided in the address group, it is assumed to be an exact match. For example, in an IPv4 address group, the absence of a mask implies a mask of /32. For an IPv6 address group, the absence of a mask implies a mask of /128.

Identify only a single IP address group for a specific policy match rule. Address lists with both src-ip and dst-ip options cannot exist in the same match rule.
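The default-mask behavior for address groups can be illustrated with Python's standard ipaddress module. This is a sketch of the rule described above, not DMF code.

```python
# Sketch of the address-group mask rule above: an entry with no mask is
# treated as an exact match, i.e. /32 for IPv4 and /128 for IPv6.
import ipaddress

def to_match_network(entry: str):
    """Return the network an address-group entry matches. ip_network
    already applies the exact-match default (/32 or /128) to a bare
    address, mirroring the rule described in the text."""
    return ipaddress.ip_network(entry, strict=False)

print(to_match_network("10.1.1.1"))     # 10.1.1.1/32   (exact match)
print(to_match_network("10.1.0.0/16"))  # 10.1.0.0/16
print(to_match_network("2001:db8::1"))  # 2001:db8::1/128 (exact match)
```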

Using the GUI to Configure Interface Groups

To create an interface group from the Monitoring > Interfaces table, perform the following steps:
  1. Select the Monitoring > Interfaces option.
    Figure 9. Creating Interface Groups from Monitoring > Interfaces
  2. On the Interfaces table, enable the checkboxes for the interfaces to include in the group.
  3. Click the Menu control at the top of the table and select Group Selected Interfaces.
  4. Complete the dialog that appears to assign a descriptive name to the interface group.
    Note: Optionally, define an interface group using the Monitoring > Interface > Groups UI.

Using the CLI to Configure Interface Groups

The following example illustrates the configuration of two interface groups: a filter interface group TAP-PORT-GRP, and a delivery interface group TOOL-PORT-GRP.
controller-1(config-switch)# filter-interface-group TAP-PORT-GRP
controller-1(config-filter-interface-group)# filter-interface TAP-PORT-1
controller-1(config-filter-interface-group)# filter-interface TAP-PORT-2
controller-1(config-switch)# delivery-interface-group TOOL-PORT-GRP
controller-1(config-delivery-interface-group)# delivery-interface TOOL-PORT-1
controller-1(config-delivery-interface-group)# delivery-interface TOOL-PORT-2
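The convenience described above — a policy naming one group instead of every member interface — can be sketched as a simple lookup (hypothetical Python; the group and interface names follow the CLI example above):

```python
# Hypothetical sketch of why interface groups simplify policies: a policy
# references one group name, and the fabric expands it to the member
# interfaces. Group and interface names follow the CLI example above.
GROUPS = {
    "TAP-PORT-GRP":  ["TAP-PORT-1", "TAP-PORT-2"],    # filter interfaces
    "TOOL-PORT-GRP": ["TOOL-PORT-1", "TOOL-PORT-2"],  # delivery interfaces
}

def expand(ref: str) -> list[str]:
    """Resolve a policy reference: a group name expands to its members;
    a plain interface name refers to itself."""
    return GROUPS.get(ref, [ref])

print(expand("TAP-PORT-GRP"))  # ['TAP-PORT-1', 'TAP-PORT-2']
print(expand("TOOL-PORT-1"))   # ['TOOL-PORT-1']
```

Adding an interface to the group automatically extends every policy that references the group, instead of requiring each policy to be edited.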

To view information about the interface groups in the DMF fabric, enter the show filter-interface-group command, as in the following examples:

Filter Interface Groups

controller-1(config-filter-interface-group)# show filter-interface-group
! show filter-interface-group TAP-PORT-GRP
# Name         Big Tap IF Name Switch            IF Name    Direction Speed   State VLAN Tag
-|------------|---------------|-----------------|----------|---------|-------|-----|--------|
1 TAP-PORT-GRP TAP-PORT-1      DMF-CORE-SWITCH-1 ethernet17 rx        100Gbps up    0
2 TAP-PORT-GRP TAP-PORT-2      DMF-CORE-SWITCH-1 ethernet18 rx        100Gbps up    0
controller1(config-filter-interface-group)#
Delivery Interface Groups
controller1(config-filter-interface-group)# show delivery-interface-group
! show delivery-interface-group TOOL-PORT-GRP
# Name          Big Tap IF Name Switch                IF Name    Direction Speed  Rate limit State Strip Forwarding Vlan
-|-------------|---------------|---------------------|----------|---------|------|----------|-----|---------------------|
1 TOOL-PORT-GRP TOOL-PORT-1     DMF-DELIVERY-SWITCH-1 ethernet15 tx        10Gbps            up    True
2 TOOL-PORT-GRP TOOL-PORT-2     DMF-DELIVERY-SWITCH-1 ethernet16 tx        10Gbps            up    True
controller-1(config-filter-interface-group)#

Switch Light CLI Operational Commands

As a result of upgrading the Debian distribution to Bookworm, the original Python CLI (based on python2) was removed, as the interaction with the DANZ Monitoring Fabric (DMF) is performed mainly from the Controller.

However, several user operations involve some of the commands used on the switch. These commands are implemented in the new CLI (based on python3) in Switch Light in the Bookworm Debian distribution.

The Zero-Trust Network (ZTN) Security CLI is the default shell when logged into the switch.

Note: The following commands are only available on Dell and Arista Switch platforms running Switch Light OS.

Operational Commands

After connecting to the switch and from the DMF Controller, use the debug admin command to enter the switch admin CLI from the ZTN CLI.

Enter the exit command to leave the switch admin CLI, as illustrated in the following example.
DMF-CONTROLLER# connect switch dmf-sw-7050sx3-1
Switch Light OS SWL-OS-DMF-8.6.x(0), 2024-05-16.08:26-17f56f6
Linux dmf-sw-7050sx3-1 4.19.296-OpenNetworkLinux #1 SMP Thu May 16 08:35:25 UTC 2024 x86_64
Last login: Tue May 21 10:39:05 2024 from 10.240.141.151

Switch Light ZTN Manual Configuration. Type help or ? to list commands.

(ztn-config) debug admin
(admin)
(admin) exit

(ztn-config)

Help

The following commands are available under the admin shell.
(admin) help

Documented commands (type help <topic>):
========================================
EOF  copy  exit  help  ping  ping6  quit  reboot  reload  show

Ping

Use the ping command to test a host's accessibility using its IPv4 address.
(admin) ping 10.240.141.151
PING 10.240.141.151 (10.240.141.151) 56(84) bytes of data.
64 bytes from 10.240.141.151: icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from 10.240.141.151: icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from 10.240.141.151: icmp_seq=3 ttl=64 time=0.221 ms
64 bytes from 10.240.141.151: icmp_seq=4 ttl=64 time=0.161 ms

--- 10.240.141.151 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3049ms
rtt min/avg/max/mdev = 0.161/0.206/0.238/0.028 ms

Ping6

Use the ping6 command to test a host's accessibility using its IPv6 address.
(admin) ping6 fe80::3673:5aff:fefb:9dec
PING fe80::3673:5aff:fefb:9dec(fe80::3673:5aff:fefb:9dec) 56 data bytes
64 bytes from fe80::3673:5aff:fefb:9dec%ma1: icmp_seq=1 ttl=64 time=0.490 ms
64 bytes from fe80::3673:5aff:fefb:9dec%ma1: icmp_seq=2 ttl=64 time=0.232 ms
64 bytes from fe80::3673:5aff:fefb:9dec%ma1: icmp_seq=3 ttl=64 time=0.218 ms
64 bytes from fe80::3673:5aff:fefb:9dec%ma1: icmp_seq=4 ttl=64 time=0.238 ms

--- fe80::3673:5aff:fefb:9dec ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3069ms
rtt min/avg/max/mdev = 0.218/0.294/0.490/0.113 ms

Copy Tech-Support Data

Use the copy tech-support data command to collect the switch support bundle. The tech support command executed from the Controller collects support bundles from all the switches in the fabric.

To collect tech-support data for an individual switch, run the copy tech-support data command directly from that switch. Do this when the switch isn't accessible from the Controller or when a support bundle is needed only for that switch.
(admin) copy tech-support data
Writing /mnt/onl/data/tech-support_240521104337.txt.gz...
tech-support_240521104337.txt.gz created in /mnt/onl/data/
(admin)

Show Commands

The following show commands are available under the admin shell.

Show Clock
(admin) show clock
Tue May 21 10:46:20 2024
(admin)
Show NTP
(admin) show ntp
remote            refid        st t when poll reach   delay  offset jitter
==========================================================================
 10.130.234.12    .STEP.       16 u    - 1024     0  0.0000  0.0000 0.0002
*time2.google.com .GOOG.        1 u  649 1024   377 40.0469  1.6957 0.4640
+40.119.6.228     25.66.230.13     u  601 1024   377 50.6083 -0.5323 7.8069
(admin)

Show Controller

The show controller command displays all the configured Controllers and their connection status and role.
(admin) show controller
IP:Port             Proto State     Role    #Aux
10.240.141.151:6653 tcp   CONNECTED ACTIVE  2
10.240.189.233:6653 tcp   CONNECTED STANDBY 2
 127.0.0.1:6653     tcp   CONNECTED STANDBY 1
(admin)
Use the history and statistics options in the show controller command to obtain additional information.
(admin) show controller history | statistics
(admin)
The show controller history command displays the history of controller-to-switch connections and disconnections.
(admin) show controller history
Mon May 20 15:53:42 2024 tcp:127.0.0.1:6653:0 - Connected
Mon May 20 15:53:43 2024 tcp:127.0.0.1:6653:1 - Connected
Mon May 20 15:54:46 2024 tcp:127.0.0.1:6653:1 - Disconnected
Mon May 20 15:54:46 2024 tcp:127.0.0.1:6653:0 - Disconnected
Mon May 20 08:57:07 2024 tcp:127.0.0.1:6653:0 - Connected
Mon May 20 08:57:07 2024 tcp:127.0.0.1:6653:1 - Connected
Mon May 20 08:57:07 2024 tcp:10.240.141.151:6653:0 - Connected
Mon May 20 08:57:07 2024 tcp:10.240.141.151:6653:1 - Connected
Mon May 20 08:57:07 2024 tcp:10.240.141.151:6653:2 - Connected
Mon May 20 11:16:07 2024 tcp:10.240.189.233:6653:0 - Connected
Mon May 20 11:16:19 2024 tcp:10.240.189.233:6653:1 - Connected
Mon May 20 11:16:19 2024 tcp:10.240.189.233:6653:2 - Connected
(admin)
The show controller statistics command displays connection statistics, including keep-alive timeout, timeout threshold count, and other important information, as shown in the following example.
(admin) show controller statistics
Connection statistics report
Outstanding async op count from previous connections: 0
Stats for connection tcp:10.240.141.151:6653:0:
Id: 131072
Auxiliary Id: 0
Controller Id: 0
State: Connected
Keepalive timeout: 2000 ms
Threshold: 3
Outstanding Echo Count: 0
Tx Echo Count: 46438
Messages in, current connection: 52887
Cumulative messages in: 52887
Messages out, current connection: 52961
Cumulative messages out: 52961
Dropped outgoing messages: 0
Outstanding Async Operations: 0
Stats for connection tcp:10.240.189.233:6653:0:
Id: 112066561
Auxiliary Id: 0
Controller Id: 1
State: Connected
Keepalive timeout: 2000 ms
Threshold: 3
Outstanding Echo Count: 0
Tx Echo Count: 42269
Messages in, current connection: 43108
Cumulative messages in: 43108
Messages out, current connection: 43114
Cumulative messages out: 43114
Dropped outgoing messages: 0
Outstanding Async Operations: 0
(admin)
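As a rough reading of the statistics above, the keepalive timeout and threshold together bound how quickly a dead connection is detected. Assuming the switch declares a Controller connection down after `Threshold` consecutive missed echo replies — an assumption about the semantics, not stated in the output — the worst-case detection time is:

```python
# Sketch using the values from the statistics output above. The formula
# (threshold consecutive missed echoes => disconnect) is an assumption
# made for illustration, not documented DMF behavior.
keepalive_timeout_ms = 2000  # "Keepalive timeout: 2000 ms"
threshold = 3                # "Threshold: 3"

worst_case_detection_ms = keepalive_timeout_ms * threshold
print(f"worst-case disconnect detection: {worst_case_detection_ms} ms")
```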
Show Log
The show log command displays log messages from the syslog file.
(admin) show log
2024-05-20T15:53:04+00:00 localhost syslog-ng[3787]: NOTICE syslog-ng starting up; version='3.38.1'
2024-05-20T15:52:51+00:00 localhost kernel: NOTICE Linux version 4.19.296-OpenNetworkLinux (bsn@sbs3) (gcc version 12.2.0 (Debian 12.2.0-14)) #1 SMP Thu May 16 08:35:25 UTC 2024
2024-05-20T15:52:51+00:00 localhost kernel: INFO Command line:reboot=p acpi=on Aboot=Aboot-norcal6-6.1.10-14653765 platform=magpie sid=Calpella console=ttyS0,9600n8 tsc=reliable pcie_ports=native pti=off reassign_prefmem amd_iommu=off onl_mnt=/dev/mmcblk0p1 quiet=1 onl_platform=x86-64-arista-7050sx3-48yc12-r0 onl_sku=DCS-7050SX3-48YC12
2024-05-20T15:52:51+00:00 localhost kernel: INFO BIOS-provided physical RAM map:
….
….
2024-05-21T10:40:31-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO Starting logrotate.service - Rotate log files...
2024-05-21T10:40:32-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO logrotate.service: Deactivated successfully.
2024-05-21T10:40:32-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO Finished logrotate.service - Rotate log files.
2024-05-21T10:45:32-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO Starting logrotate.service - Rotate log files...
2024-05-21T10:45:33-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO logrotate.service: Deactivated successfully.
2024-05-21T10:45:33-07:00 dmf-sw-7050sx3-1 systemd[1]: INFO Finished logrotate.service - Rotate log files.
(admin)

Reboot and Reload

Use the reboot and reload commands to either reboot or reload the switch, as needed.
(admin) help reboot

Reboot the switch.

(admin) help reload

Reload the switch.

(admin) reboot
Proceed with reboot [confirm]?

(admin) reload
Proceed with reload [confirm]?
(admin)

Introduction to DMF and Overview

This chapter introduces the DANZ Monitoring Fabric (DMF) and the user interfaces for out-of-band monitoring and configuration.

DANZ Monitoring Fabric (DMF) is a cloud-first Network Packet Broker (NPB) that provides a single pane of glass with an integrated visibility fabric. The DMF solution includes NPB functionality with the DMF Recorder Node and the Analytics Node for deeper monitoring and pervasive security of out-of-band workloads in hybrid cloud deployments.

DMF leverages an SDN-controlled fabric using high-performance, open networking (white box/brite box) switches and industry-standard x86 servers to deploy highly scalable and flexible network visibility and security solutions. Traditional, box-based, hardware-centric NPBs are architecturally limited when trying to meet the evolving security and visibility demands of Cloud Native data centers. DMF addresses the challenges of traditional NPB solutions by enabling a scale-out fabric for enterprise-wide security and monitoring, a single pane of glass for operational simplicity, and multi-tenancy for multiple IT teams, including NetOps, DevOps, and SecOps.

Out-of-band Monitoring with DANZ Monitoring Fabric

As data center networks move toward 40/100G designs, cloud computing, hyperscale data analytics, and 5G mobile services, traffic monitoring must transition to next-generation designs. To manage the modern data center, much network traffic must be copied and aggregated from TAP or SPAN ports and forwarded to monitoring and analysis tools. These tools, used in managing network performance, application performance, security, and compliance, leverage other systems such as data recorders, intrusion detection systems, data leakage detectors, SLA measurement devices, and other traffic analyzers like Wireshark.

DANZ Monitoring Fabric (DMF) uses high-performance open-networking switches to deliver an open, production-grade, and scalable monitoring solution based on Software Defined Networking (SDN) technology. The centralized DMF Controller provides flexibility, simplifies policy management and monitoring fabric configuration, and supports cost-effective monitoring of data centers and remote sites or branches with up to several thousand TAP and SPAN ports.

DMF architecture, inspired by hyperscale networking designs, consists of the following components:

  • An HA pair of SDN-enabled DMF Controllers (VMs or hardware appliances) enables simplified and centralized configuration, monitoring, and troubleshooting.
  • Arista Networks SDN-enabled Switch Light OS is a production-grade, ONIE-deployable, lightweight OS that runs on DMF Ethernet switches.
  • Open Ethernet switches (white box/brite box) use the same merchant silicon ASICs used by most incumbent switch vendors and are widely deployed in production data center networks. These switches ship with an Open Network Install Environment (ONIE) for automatic and vendor-agnostic installation of third-party network OS.
  • DANZ Service Nodes (optional) are Data Plane Development Kit (DPDK)-powered, x86-based appliances that connect to the DMF singly or as part of a service node chain. The Service Node provides advanced packet functions, such as deduplication, packet slicing, header stripping, regex matching, packet masking, UDP replication, and IPFIX/NetFlow generation.
  • DANZ Recorder Nodes (optional) are x86-based appliances connected to the DMF and are managed via the DMF Controller to provide petabyte packet recording, querying, and replay functions.
  • Analytics Nodes (optional) are x86-based appliances that integrate with the DMF to provide multi-terabit, security, and performance analytics with configurable, historical time-series dashboards.
Figure 1. Out-of-Band Monitoring with DANZ Monitoring Fabric
DMF lets a network operator easily deploy data center-wide monitoring with the following benefits:
  • Organization-wide visibility: delivers traffic from any TAP to any tool at any time across one or multiple locations.
  • Flexible, scale-out fabric deployment: supports a large number of 1G, 10G, 25G, 40G, and 100G ports (thousands per fabric).
  • Multi-tenant tap and tool sharing: supports monitoring by multiple teams to enable Monitoring Fabric as a Service.
  • Massive operational simplification: provides a single pane of glass for provisioning, management, monitoring, and debugging through a centralized SDN Controller, eliminating the need for box-by-box configuration.
  • Centralized programmability: a REST-based API architecture enables event-based, centralized policy management and automation for integrated end-to-end IT workflows. This feature leverages DMF Service Nodes, Analytics Nodes, and Recorder Nodes.
  • Dramatic cost savings: achieves a significant reduction in the total cost of ownership by using open Ethernet switches in combination with industry-standard x86 servers, optimized usage of tools, and SDN-enabled operations and automation.

Using the DANZ Monitoring Fabric CLI

Before connecting to the DANZ Monitoring Fabric (DMF) Controller, ensure the DMF application is running. Log in to the DMF Controller using the local console or SSH to the address assigned to the DMF Controller during installation.
Note: Make all fabric switch configuration changes using the Controller CLI, which provides configuration options in the config-switch submode for each switch. Do not log in to the switch to make changes directly using the switch CLI.
CLI commands are divided into modes and submodes, which restrict commands to the appropriate context. The main modes and their available commands are as follows:
  • login mode: commands available immediately after logging in, with the broadest possible context.
  • enable mode: commands that are available only after entering the enable command.
  • config mode: commands that significantly affect system configuration and can only be entered after entering the configure command. The user can also access submodes from this mode.

Enter submodes from config mode to provision specific monitoring fabric objects. For example, the switch switchname command changes the CLI prompt to (config-switch)# and lets the user configure the switch identified by the switch name.

When the user logs in via SSH to the Controller, the CLI appears in login mode, where the default prompt is the system name followed by a greater than sign (>), as shown below:
controller-1>
To change the CLI to enable mode, enter the enable command. The default prompt for enable mode is the system name followed by a pound sign (#), as shown below:
controller-1> enable
controller-1#
To change to config mode, enter the configure command. The default prompt for config mode is the system name followed by (config)#, as shown below:
controller-1# configure
controller-1(config)#
To change to a submode, enter the command from config mode, followed by any object identifier required, as in the following example:
controller-1(config)# switch filter-switch-1
controller-1(config-switch)# interface ethernet54
controller-1(config-switch-if)#
To return to enable mode, type end, as shown below:
controller-1(config)# end
controller-1#
To view the path to the current CLI prompt, enter the show this command from any nested submode, as in the following example:
controller-1(config-switch-if)# show this
! switch
switch filter-switch-1
interface ethernet54
To view details about the configuration, enter the show this details command, as in the following example:
controller-1(config-switch-if)# show this details
! switch
switch filter-switch-1
!
interface ethernet54
no force-link-up
no optics-always-enabled
no shutdown
To view a list of available commands in the current or submode, enter the help command.
controller-1> help
For help on specific commands: help <command>
Commands:
%<n> Move job to foreground
debug
echo Print remaining arguments
enable Enter enable mode
exit Exit submode
help Show help
history Show commands recently executed
logout Logout
no Prefix existing commands to delete item
ping Send echo messages
ping6 Send echo messages
profile Configure user profile
reauth Reauthenticate
set Manage CLI sessions settings
show
support Generate diagnostic data bundle for technical support
terminal Manage CLI sessions settings
topic Show documentation on topic
upload Upload diagnostic data bundle for technical support
watch Show output of other commands
whoami Identify the current authenticated account
workflow Show workflow documentation
controller-1>
To view detailed online help for the command, enter the help command followed by the command.
controller-1> help support
Support Command: Generate diagnostic data bundle for technical support
Support Command Syntax: no support skip-switches skip-cluster skip-service-nodes
 skip-recorder-nodes sequential support [[skip-switches]
 [skip-cluster] [skip-service-nodes]
 [skip-recorder-nodes] [sequential]]
Next Keyword Descriptions:
sequential: Use sequential (non-parallel) fallback collection mode, which will be slower
 but use fewer resources.
skip-cluster: Skip cluster information from the collection.
skip-recorder-nodes: Skip recorder nodes information from the collection.
skip-service-nodes: Skip service nodes information from the collection.
skip-switches: Skip switches information from the collection.
controller-1>
To display the options available for a command or keyword, enter the command or keyword followed by a question mark (?).
controller-1> support ?
<cr>
sequential          Use sequential (non-parallel) fallback collection mode, which will be slower
                    but use fewer resources.
skip-cluster        Skip cluster information from the collection.
skip-recorder-nodes Skip recorder nodes information from the collection.
skip-service-nodes  Skip service nodes information from the collection.
skip-switches       Skip switches information from the collection.
controller-1>
To view any command's permitted values or keywords, enter the command followed by a space and press the <Tab> key. The command completion feature displays a concise list of permitted values, as in the following example:
controller-1> support <TAB>
<cr> sequential skip-cluster
skip-recorder-nodes skip-service-nodes skip-switches
controller-1>

For information about managing administrative access to the DMF Controller, refer to the DANZ Monitoring Fabric 8.6 Deployment Guide.

Using the DANZ Monitoring Fabric GUI

The DANZ Monitoring Fabric (DMF) Graphical User Interface (GUI) performs operations similar to the CLI using a graphical user interface instead of text commands and options. The DMF GUI is compatible with recent versions of any of the following supported browsers:
  • Mozilla Firefox
  • Google Chrome
  • Microsoft Edge
  • Internet Explorer
  • Apple Safari

To connect to the DMF GUI, use the DMF Controller IP address. Use the virtual IP (VIP) assigned to the cluster if configured during deployment. Using the VIP ensures that the user connects to the current active Controller, regardless of any failover that may have occurred.

Use the active Controller for all configuration operations and to obtain reliable information when monitoring DMF. The standby Controller is provided only for redundancy if the active Controller becomes unavailable. Do not perform any configuration using the standby Controller; any information it displays may not be accurate. The figure below illustrates connecting to the DMF GUI using HTTPS (port 443) at the IP address 192.168.17.233.
Figure 2. Connecting to the DANZ Monitoring Fabric GUI
Connecting to the Controller for the first time may result in a security exception prompt (message) because the Controller HTTPS server uses an unknown (self-signed) certificate authority.
Note: When using Internet Explorer, the login attempt may fail if the system time is different than the Controller time. To remedy this, ensure the system used to log in to the Controller is synchronized with the Controller.
After accepting the prompts, the system displays the login prompt, shown in the figure below.
Figure 3. DANZ Monitoring Fabric GUI Login Prompt

Use the admin username and password configured for the DMF Controller during installation or any user account and password configured with administrator privileges. A user in the read-only group will have access to options for monitoring fabric configuration and activity but cannot change the configuration.

Figure 4. DANZ Monitoring Fabric GUI Main Menu
After logging in to the DMF GUI, a landing page appears. This page shows the DMF Controller Overview, dashboard, and a menu bar (pictured above) with sub-menus containing options for setting up DMF and monitoring network activity. The menu bar includes the following sub-menus:
  • Fabric: manage DMF switches and interfaces.
  • Monitoring: manage DMF policies, services, and interfaces.
  • Maintenance: configure fabric-wide settings (clock, SNMP, AAA, sFlow®*, Logging, Analytics Configuration).
  • Integration: manage the integration of vCenter instances to allow monitoring traffic using DMF.
  • Security: manage administrative access.
  • Profile: display or edit user preferences, change the password, or sign out.
The newly designed dashboard displays information about the Controller, including switches, interfaces, policies, and Smart Nodes.
Figure 5. DMF Controller Overview
The header displays the following basic information about the Controller:
  • Active IP address
  • Standby IP address
  • Virtual IP address
  • Redundancy Status - includes an informational tooltip; hover over it for more details.
Four cards control the type of content displayed on the main section of the page. The cards are:
  • Controller Health
  • Switch Health
  • Policy Health
  • Smart Node Health
Note: This dashboard is enabled by default; the setting is in the Navigation section of the Settings page. Toggling it off displays the previous dashboard, as illustrated below.
Figure 6. Legacy Dashboard

DMF Features Page

To navigate the DMF Features Page, click on the gear icon in the navigation bar.
Figure 7. Gear Icon

Page Layout

All fabric-wide configuration settings required in advanced use cases for deploying DMF policies appear in the new DMF Features Page.

Figure 8. DMF Features Page

The fabric-wide options used with DMF policies include the following:

Table 1. Feature Set
Auto VLAN Mode Auto VLAN Range
Auto VLAN Strip Control Plane Lockdown Mode
CRC Check Custom Priority
Device Deployment Mode Global PTP Settings
Inport Mask Match Mode
Policy Overlap Limit Policy Overlap Limit Strict
Retain User Policy VLAN Timestamp Settings
Tunneling VLAN Preservation
Each card on the page corresponds to a feature set.
Figure 9. Feature Set Card
The UI displays the following:
  • Feature Title
  • A brief description
  • View / Hide Detailed Information
  • Current Setting
  • Edit Link - Use the Edit button (pencil icon) to change the value.

View Detailed Information

Each configuration option has detailed information. For more details, click the View Detailed Information link on each card.
Figure 10. View Detailed Information

Feature Settings

Auto VLAN Strip

  1. A toggle button controls the configuration of this feature. Locate the corresponding card and move the toggle button.
    Figure 11. Toggle Switch
  2. A confirmation window pops up, displaying the corresponding prompt message. Select the Enable button to confirm the configuration changes or the Cancel button to cancel the configuration. Conversely, to disable the configuration, select Disable.
    Figure 12. Confirm Enable
    Figure 13. Confirm Disable
  3. Review any warning messages that appear in the confirmation window during the configuration process.
    Figure 14. Warning Message - Changing

The following feature sets work in the same manner as the Auto VLAN Strip feature described above.

  • CRC Check
  • Policy Overlap Limit Strict
  • Custom Priority
  • Retain User Policy VLAN
  • Inport Mask
  • Tunneling

Auto VLAN Mode

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 15. Auto VLAN Mode Config
  2. An edit dialogue window appears, displaying the corresponding prompt message.
    Figure 16. Edit VLAN Mode
  3. To configure different modes, click the drop-down arrow to open the menu.
    Figure 17. Drop-down Example
  4. From the drop-down menu, select and click on the desired mode.
    Figure 18. Push Per Policy
  5. Alternatively, enter the desired mode name in the input area.
    Figure 19. Push Per Policy
  6. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
    Figure 20. Submit Button
  7. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 21. Current Configuration Status

The following feature sets work in the same manner as the Auto VLAN Mode feature described above.

  • Device Deployment Mode
  • Match Mode

Auto VLAN Range

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 22. Edit Auto VLAN Range
  2. A configuration edit dialogue window pops up, displaying the corresponding prompt message. The Auto VLAN Range defaults to 1 - 4094.
    Figure 23. Edit Auto VLAN Range
  3. Click on the Custom button to configure the custom range.
    Figure 24. Custom Button
  4. Adjust the range values (minimum value: 1, maximum value: 4094). There are three ways to adjust the value of a range:
    • Directly enter the desired value in the input area, with the left side representing the minimum value of the range and the right side representing the maximum value.
    • Adjust the value by dragging the slider using a mouse. The left knob represents the minimum value of the range, while the right knob represents the maximum value.
    • Use the up and down arrow buttons in the input area to adjust the value accordingly. Pressing the up arrow increments the value by 1, while pressing the down arrow decrements it by 1.
  5. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  6. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 25. Configuration Change Success
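The range constraints described above can be sketched in code. This is an illustrative Python snippet only, not DMF code; the function name and error messages are invented for the example, while the bounds (1 and 4094) come from the text.

```python
# Illustrative only: validate a custom auto-VLAN range against the same
# bounds the Edit Auto VLAN Range dialog enforces (minimum 1, maximum 4094).

VLAN_MIN, VLAN_MAX = 1, 4094

def validate_vlan_range(low: int, high: int) -> tuple:
    """Return (low, high) if the range is acceptable, else raise ValueError."""
    if not (VLAN_MIN <= low <= VLAN_MAX):
        raise ValueError(f"minimum must be between {VLAN_MIN} and {VLAN_MAX}")
    if not (VLAN_MIN <= high <= VLAN_MAX):
        raise ValueError(f"maximum must be between {VLAN_MIN} and {VLAN_MAX}")
    if low > high:
        raise ValueError("minimum cannot exceed maximum")
    return (low, high)

print(validate_vlan_range(1, 4094))   # the default range
print(validate_vlan_range(100, 200))  # a custom range
```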

Policy Overlap Limit

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 26. Policy Overlap Limit
  2. A configuration edit dialogue window pops up, displaying the corresponding prompt message. By default, the Policy Overlap Limit is 4.
    Figure 27. Edit Policy Overlap Limit
  3. Adjust the Value (minimum value: 0, maximum value: 10). There are two ways to adjust the value:
    • Directly enter the desired value in the input area.
    • Use the up and down arrow buttons in the input area to adjust the value accordingly. Pressing the up arrow increments the value by 1, while pressing the down arrow decrements it by 1.
  4. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  5. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 28. Policy Overlap Limit Change Success

VLAN Preservation

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 29. VLAN Preservation Feature Set
  2. A configuration edit dialogue window appears displaying the corresponding prompt message. The VLAN Preservation defaults to:
    • Preserve User Configured VLANS: Off
    • Preserve VLAN: No VLAN Configured
  3. To configure Preserve User Configured VLANs, toggle on the switch.
    Figure 30. Edit VLAN Preservation Configuration
  4. To configure Preserve VLAN, click the Add VLAN button to add a configuration area for preserving the VLAN value.
    Figure 31. Preserve VLAN - Add VLAN
  5. Click the drop-down button. There are two ways to configure the preserved VLAN value (minimum value: 1, maximum value: 4094) and a method to delete an entry.
    Figure 32. VLAN Single Example
    • Add Single: Choose Single in the VLAN drop-down menu, and type in the value in the input area.
      Figure 33. Add Single VLAN
    • Add Range: Choose Range in the VLAN drop-down menu, and type in the input area's minimum and maximum values.
      Figure 34. Add VLAN Range
    • Delete: Every VLAN configuration area must contain a value when submitting the configuration. If redundant VLAN configuration areas were added accidentally, delete the corresponding rows by clicking the red trash can icon.
    Note: The feature supports combinations of any number of single values and any number of range values.
  6. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  7. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 35. Preserve VLAN Configuration Change
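The note above says any combination of single values and range values is supported. As a sketch of how such a mix expands to one set of preserved VLAN IDs, here is an illustrative Python snippet (not DMF code; the function name and input encoding are invented for the example):

```python
# Illustrative only: expand a mix of Single entries (ints) and Range
# entries ((min, max) tuples), as entered in the Preserve VLAN dialog,
# into one set of VLAN IDs. Entries outside 1-4094 are rejected.

def expand_preserved_vlans(entries):
    """entries: list of ints (Single) or (min, max) tuples (Range)."""
    vlans = set()
    for entry in entries:
        lo, hi = (entry, entry) if isinstance(entry, int) else entry
        if not (1 <= lo <= hi <= 4094):
            raise ValueError(f"invalid VLAN entry: {entry!r}")
        vlans.update(range(lo, hi + 1))
    return vlans

# Two singles and one range, in any combination:
print(sorted(expand_preserved_vlans([10, (100, 103), 42])))
# [10, 42, 100, 101, 102, 103]
```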

Global PTP Settings

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 36. Global PTP Settings
  2. A configuration edit dialogue window appears displaying the corresponding prompt message. By default, these features are not configured. Enter the desired configuration value in the corresponding input area. Hover over the question mark icon to obtain additional explanatory information.
    Figure 37. Edit PTP Settings
  3. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  4. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 38. PTP Timestamping Configuration Change

Feature Setting Notification Message

Successfully configuring a feature results in a success notification message pop-up with specific details.
Figure 39. Success Message
Whenever an error occurs during the configuration of a feature, an error notification message pops up along with specific details about the error.
Figure 40. Failure Message

Control Plane Lockdown Mode

Enable or disable the Control Plane Lockdown Mode feature.
  1. A toggle button controls the configuration of this feature. Locate the corresponding card and click the toggle switch.
    Figure 41. Control Plane Lockdown Mode
  2. Click the Enable button to enable Control Plane Lockdown Mode or the Cancel button to discard the changes.
    Figure 42. Enable Control Plane Lockdown Mode
    Note: Changing the Control Plane Lockdown Mode may cause some service interruption during the transition.
  3. On enabling Control Plane Lockdown Mode, a success notification message pops up with specific details.
    Figure 43. Success Message

Timestamp Settings

  1. Control the configuration of this feature using the Edit icon by locating the corresponding card and clicking on the pencil icon.
    Figure 44. Timestamp Settings
  2. To configure different header modes, click on the drop-down arrow. There are two ways to edit the timestamp settings - Replace Source MAC or Add Header after L2.
    Figure 45. Edit Timestamp Settings
  3. For Add Header after L2 Mode, choose the header format as 48-bit or 64-bit.
    Figure 46. Add Header after L2
  4. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
    Figure 47. Submit Timestamp Changes
  5. Select Replace Source MAC Mode and click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
    Figure 48. Replace Source MAC
  6. On enabling Timestamp Settings, a success notification message pops up with specific details.
    Figure 49. Success Message

Dashboard Layout

The dashboard data displays four tabs:
  • Controller Health
  • Switch Health
  • Policy Health
  • Smart Node Health
Each tab has health indicators for that category, and accessing the tab displays the relevant data below.
Figure 50. DANZ Monitoring Fabric (DMF) Controller Tabs
If a category contains errors or warnings, clicking on the message in the tab opens a details window. It displays the number of errors or warnings filtered by tab category.
Figure 51. Filtered by Category
Review errors by clicking the bell icon on the right side of the Navigation bar, and it will list all fabric errors and warnings instead of filtering by an individual tab.
Figure 52. Notification Bell

Controller Health

DANZ Monitoring Fabric (DMF) Interface Utilization

This widget displays the utilization of each DMF interface as follows:

  • DMF Interface Name
  • Interface Role
  • Traffic Direction
  • Current Utilization (%)
  • Peak Utilization (%)
Figure 53. DMF Interface Utilization

The bar indicates the current utilization and shows peak utilization with a vertical line. The color of the bar and percentage change depending on the utilization:

  • Red means the utilization percentage is greater than 95%.
  • Yellow means the utilization percentage is greater than 70%.
  • Green means the utilization percentage is less than 70%.
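The color thresholds above can be expressed as a small classifier. This is an illustrative Python sketch only, not DMF code; the text leaves the exact 70% boundary unspecified, so treating exactly 70% as green is an assumption here.

```python
# Illustrative only: map a utilization percentage to the bar color
# described in the text (> 95% red, > 70% yellow, otherwise green).
# Assumption: a value of exactly 70% falls in the green band.

def utilization_color(pct: float) -> str:
    if pct > 95:
        return "red"
    if pct > 70:
        return "yellow"
    return "green"

for pct in (50, 70, 80, 96):
    print(pct, utilization_color(pct))
```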

Filter interfaces display only RX traffic, while delivery interfaces display only TX traffic. Other roles carry bidirectional data and can have one item for each traffic direction, RX and TX.

The Show All button leads to the DMF Interfaces page.

On hover, the bar shows the interface’s Bit Rate, Peak Bit Rate, and Speed in bits per second.
Figure 54. DMF Interface Utilization Hover Details

Sort interfaces by Interface Name or Current Utilization. The interfaces are sorted by current utilization (descending order) by default.

Filter the displayed interfaces by Role, Traffic Direction, and Current Utilization.
Figure 55. Sort Roles

Top DMF Interfaces by Traffic

This visualization displays each DMF interface's traffic (bit rate and packet rate).
Figure 56. Top DMF Interfaces by Traffic

The widget shows each interface's traffic direction, DMF Interface name, bit rate, and packet rate. The Show All button leads to the DMF Interfaces page. Sort interfaces by Bit Rate and filter by Metric and Role. By default, DMF sorts the data in descending order of bit rate.

On hover, the widget shows the DMF name, bit rate, and packet rate.
Figure 57. Top DMF Interfaces by Traffic Hover Details

Top Policies

The widget displays the top policies in DMF. For each policy, traffic is determined by totaling the traffic of each of its configured filter interfaces.
Figure 58. DMF Top Policies

For each policy, the bar chart shows the following:

  • Policy Name
  • The sum of the bit rates of all filter interfaces associated with the policy.
  • The sum of the packet rates of all filter interfaces associated with the policy.
On hover, the bar displays the policy name, bit rate, and packet rate.
Figure 59. DMF Policies Hover Details

Sort policies by Bit Rate and filter by Metric. By default, DMF sorts the policies in descending order of bit rate.

The Show All button leads to the Policies page.
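The aggregation described above (a policy's displayed traffic is the total over its configured filter interfaces, sorted by bit rate in descending order) can be sketched as follows. This is illustrative Python only, not DMF code; the interface names, rates, and function name are invented for the example.

```python
# Illustrative only: total each policy's traffic over its filter
# interfaces and sort descending by bit rate, matching the widget's
# default ordering.

def policy_totals(filter_rates, policy_filters):
    """filter_rates: {iface: (bit_rate, packet_rate)};
    policy_filters: {policy: [iface, ...]}."""
    totals = {}
    for policy, ifaces in policy_filters.items():
        bits = sum(filter_rates[i][0] for i in ifaces)
        pkts = sum(filter_rates[i][1] for i in ifaces)
        totals[policy] = (bits, pkts)
    # Default sort: descending order of bit rate.
    return sorted(totals.items(), key=lambda kv: kv[1][0], reverse=True)

rates = {"f1": (9_000, 12), "f2": (1_000, 3), "f3": (500, 1)}
print(policy_totals(rates, {"P1": ["f1", "f2"], "P2": ["f3"]}))
# [('P1', (10000, 15)), ('P2', (500, 1))]
```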

Switch Health

Interface Usage Summary

This widget displays the usage statistics for all DANZ Monitoring Fabric (DMF) interfaces. All active interfaces are grouped by utilization:
  • Red means that the utilization percentage is greater than 95%.
  • Yellow means that the utilization percentage is greater than 70%.
  • Green means that the utilization percentage is less than 70%.
Figure 60. DMF Interface Usage Summary
Three other categories cover DMF interfaces with no traffic. These appear beneath the usage bar:
  • Admin Shutdown
  • Link Down
  • Unknown - when the interface speed is undefined or not known
Total Capacity Used is also displayed, with Total Capacity defined as the number of active DMF interfaces divided by the total number of DMF interfaces.
On hover, the number of interfaces in each category appears in the respective usage bar.
Figure 61. DMF Interface Usage Hover Details

Switch Usage Summary

This widget displays the usage statistics for each switch. All switches are grouped by:

  • Active (Green)
  • Admin Shutdown (Yellow)
  • Down (Red)
  • Quarantined (Grey)
Figure 62. DMF Switch Usage Summary
The total number of switches is displayed.

Three list items display the number of:

  • Switches Admin Shutdown
  • Switches Down
  • Switches Quarantined
On hover, the number of switches in each category appears in the respective usage bar.
Figure 63. DMF Switch Usage Hover Details

TCAM Usage Summary

This widget displays the usage statistics for the TCAM of each switch and groups all active TCAMs by usage:

  • Red means that the utilization percentage is greater than 95%.
  • Yellow means that the utilization percentage is greater than 70%.
  • Green means that the utilization percentage is less than 70%.
  • Grey means that the utilization is Unknown.
    • A switch is grouped in the Unknown category when no TCAM usage statistics are available, generally from a switch being shut down or disconnected.
Figure 64. DMF TCAM Usage Summary

The View Details link leads to the TCAM Utilization tab of the Switches page.

Below the Usage Bar, there are three list items displaying:
  • Switch Usage 71% - 95%
  • Switch Usage 96% - 100%
  • Unknown
On hover, the number of switches in each category appears in the respective usage bar.
Figure 65. DMF TCAM Utilization Hover Details

DMF Interface Utilization

DMF Interface Utilization is similar to the data displayed in the Controller Health tab. Please refer to its description for more information.

Switch Utilization

This widget contains two tabs:
  • Switch Usage
  • TCAM Usage

Switch Usage

The Switch Usage tab of the Switch Utilization widget displays essential information for each switch, including the use of each switch interface and alerts for any warnings or errors.
Figure 66. DMF Switch Usage Tab

 

The widget displays the following data for each switch:

  • Switch Name (contains a link to the Switches page for that specific switch).
  • Switch Usage: Each section represents the number of interfaces with a specific role.
  • Total Usage: Displays the Number of Interfaces with an assigned role divided by the Total Number of Interfaces on the switch.
  • Alerts: This column displays any alerts related to interfaces.
    • The yellow badge indicates the number of warnings, while the red badge shows the number of errors.
The Switch Usage column contains the number of interfaces for each role:
  • Filter
  • Delivery
  • Filter and Delivery
  • Core
  • Recorder Node
  • Service
  • PTP
  • MLAG Core
  • MLAG Delivery
The table can be sorted by three columns:
  • Switch Name: alphabetical order.
  • Total Usage: percentage (%) usage (# used interfaces / # total interfaces).
  • Alerts: total number of alerts (# warnings + # errors).

The default sort order for this table is the Alerts column in descending order, which ensures the switches with the highest number of alerts are initially at the top.
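The Total Usage calculation and the default Alerts sort described above can be sketched in code. This is an illustrative Python snippet only, not DMF code; the switch records and the helper name are invented for the example.

```python
# Illustrative only: Total Usage is the number of interfaces with an
# assigned role divided by the total interfaces on the switch; the table
# sorts by total alerts (warnings + errors), descending, by default.

def total_usage(used: int, total: int) -> float:
    """Percentage of interfaces with an assigned role."""
    return 100.0 * used / total if total else 0.0

switches = [
    {"name": "filter-switch-1", "used": 12, "total": 48, "warnings": 1, "errors": 0},
    {"name": "core-switch-1", "used": 30, "total": 32, "warnings": 2, "errors": 3},
]
# Default sort: Alerts column (warnings + errors), descending.
switches.sort(key=lambda s: s["warnings"] + s["errors"], reverse=True)
print([s["name"] for s in switches])  # ['core-switch-1', 'filter-switch-1']
print(total_usage(12, 48))            # 25.0
```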

On hover, the number of each interface appears.
Figure 67. DMF Switch Usage Filter Interfaces Hover Details

Hovering over a badge displays a table of Warnings (for the yellow badge) or Errors (for the red badge), along with the switch name.

Each row of the table contains the following:

  • Interface name (includes a link to Interfaces/[INTERFACE-NAME] page)
  • Interface role
  • Alert type (e.g., Down Delivery Interface)
Figure 68. DMF Interface Warnings
When data is unavailable for a switch (C1), there will be a yellow badge under the Switch Name that says Data Not Available. The Switch Usage column will have an empty usage bar, the Total Usage Column will show 0 (zero) for the number of currently used interfaces, and the Alerts column will be empty.
Figure 69. DMF Switch Usage - Data Not Available
When a switch is down, a red badge appears under the Switch Name that says Switch Not Connected. The other columns will be empty in the same way as the Data Not Available case.
Figure 70. DMF Switch Usage - Switch Not Connected

TCAM Usage

The TCAM Usage widget displays the current utilization of the TCAMs for each active switch. A switch can have a TCAM for IPv4, IPv6, or both. Each TCAM has a guaranteed maximum usage and current utilization. This table compares the current utilization of each TCAM to its guaranteed maximum.
Figure 71. DMF Usage (Policy Usage Only)

This widget displays a TCAM Usage chart for each switch:
  • The purple bar shows IPv4 Current Utilization and Guaranteed Maximum.
  • The cyan bar shows IPv6 Current Utilization and Guaranteed Maximum.
  • Each row displays Current Utilization (IPv4 + IPv6 Current Utilization).
Sort the chart by Switch Name or Current Utilization:
  • Sort the Switch Name column alphabetically (descending and ascending).
  • Sort the Current Utilization column in descending or ascending order (IPv4 + IPv6 Current Utilization).
The default sort order for the table is the Current Utilization column in descending order, ensuring the switches with the highest current utilization display first.

Top DMF Interfaces by Traffic

The visualization shows DMF interface traffic (bit rate and packet rate) color-coded by interface role. The roles displayed are:
  • Core
  • Delivery
  • Filter
  • Filter and Delivery
  • MLAG Core
  • MLAG Delivery
  • Recorder Node
  • Service
Figure 72. DMF Top Interfaces by Traffic
For each interface, the chart item shows:
  • Interface role
  • Traffic direction
  • DMF interface name
  • Bit rate
  • Packet rate

The Show All button leads to the DMF Interfaces page.

Sort the interfaces by bit rate; by default, they are sorted in descending order. Filter interfaces by interface role using the drop-down.
Figure 73. DMF Sort Interfaces by Bit Rate

Policy Health

Policies Usage by Traffic

This widget displays policy traffic. For each policy, the bar chart shows:

  • Name of the policy
  • Bit rate
  • Packet rate
Figure 74. DANZ Monitoring Fabric (DMF) Policies Usage by Traffic
On hover, similar information displays.

Sort policies by Bit Rate.

The Show All button leads to the Policies page.

Active Interfaces by Policy

The table displays DMF interfaces associated with policies. DMF interfaces that are not affiliated with a policy are not displayed.
Figure 75. DMF Active Interfaces by Policy

The table contains the following columns:

  • DMF Interface Name: The DMF name of the switch interface.
  • Role: The role of the interface.
  • Policy Name(s): A list of the policies associated with the interface.
  • Bit Rate: The bit rate of the interface.
  • Packet Rate: The packet rate of the interface.

The Show All button leads to the DMF Interfaces page.

Sort the table by each column; DMF sorts the items in descending bit rate order by default.

Two filters, Roles and Interfaces, allow filtering the data by interface role and DMF interface name.
Figure 76. DMF Active Interfaces by Policy - Roles
Figure 77. DMF Active Interfaces by Policy - Interfaces

Smart Node Health

Recorder Node

The Recorder Nodes table displays Recorder Node health and the following columns:
  • Recorder Node Name
  • IP Address
  • MAC Address
  • Recording
    • Indicates the status of the Recorder Node recording configuration, either Yes or No.
  • Storage Utilization
    • Index and Packet disk storage utilization % (percentage) using the following colors:
      • Red means the utilization percentage is greater than 95%.
      • Yellow means the utilization percentage is greater than 70%.
      • Green means the utilization percentage is less than 70%.
Figure 78. Recorder Nodes
On hover, various details appear depending on the column selected. These include:
  • Free and Total Disk Usage
  • Backup Storage Utilization
    • Index and Packet backup disk storage utilization % (percentage) using the following colors:
      • Red means the utilization percentage is greater than 95%.
      • Yellow means the utilization percentage is between 70% and 95%.
      • Green means the utilization percentage is less than 70%.
  • Virtual Disk Health
    • Status of the Index and Packet virtual disks:
      • Green means the virtual disk’s health is good.
      • Red means the virtual disk’s health is bad.
  • Recorder Node Fabric Interface
    • Shows the DMF interface name and its status where the Recorder Node connects to the DMF fabric.
    • Switch, Interface, and status
  • Zero Touch State
  • Alerts
    • Errors and warnings for the Recorder Node. Hovering over an error displays additional information about the errors and warnings.

The following are examples of detailed information when hovering.
Figure 79. Example - Index Disk Storage
Figure 80. Example - Index Backup Disk Storage
Figure 81. Example - Recorder Node Fabric Interface
Figure 82. Example - Errors

The View All link leads to the Recorder Node page.
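The storage-utilization color coding described above can be summarized as a small illustrative function. This is a sketch only: the function name and signature are hypothetical, and just the thresholds (red above 95%, yellow between 70% and 95%, green below 70%) come from the documentation.

```python
def storage_utilization_color(utilization_pct: float) -> str:
    """Map a disk utilization percentage to the dashboard status color.

    Hypothetical helper; thresholds follow the Recorder Node
    health rules: red > 95%, yellow 70-95%, green < 70%.
    """
    if utilization_pct > 95:
        return "red"
    if utilization_pct > 70:
        return "yellow"
    return "green"
```

For example, a Packet disk at 80% utilization would display yellow, while one at 96% would display red.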

Service Node

The Service Nodes table displays Service Node health and the following columns:
  • Service Node Name
  • IP Address
  • Service Node Interface Load
  • Zero Touch State
Figure 83. DANZ Monitoring Fabric (DMF) Service Nodes
Hovering over the Service Node Interface Load column displays:
  • Interface Name
  • Service Name
  • Action
Figure 84. DMF Service Nodes Hover Details

The View All link leads to the Service Node page.

Analytics Node

The Analytics Node table displays Analytics Node health and the following columns:
  • IP Address: The configured Analytics Node IP address.
    • Clicking on the IP Address opens the Analytics Node UI.
  • Redis Status
    • Displays the status in green if healthy, along with the last updated timestamp.
    • Displays the status in red if unhealthy, along with the last updated timestamp.
  • Replicated Redis Status
    • Displays the status in green if healthy, along with the last updated timestamp.
    • Displays the status in red if unhealthy, along with the last updated timestamp.
Figure 85. DMF Analytics Node

The View Details link leads to the Analytics Node details page.

Refreshing Data

Data automatically refreshes every minute, and interface topology data automatically refreshes every 10 seconds.

Manually refresh dashboard data using the Refresh button.

Empty State

When there are no provisioned switches, DANZ Monitoring Fabric (DMF) Interface Utilization and Top DMF Interfaces by Traffic will display an Empty Component.

Each empty component contains a link to provision a switch. The system prompts the user to create a DMF interface if there are provisioned switches but no assigned DMF interfaces.
Figure 86. DMF Controller Overview - Empty State

Top Policies will display an Empty Component if no policies exist.

Use the Create Policy button to go to the Create Policy page.
Figure 87. DMF Switch Health - Empty State
The Usage Summary components will display Unused or No Switches Connected for the usage bar legend.
Figure 88. DMF Policy Health - Empty State

Policies Usage by Traffic displays the same Empty Component as Top Policies.

*sFlow® is a registered trademark of Inmon Corp.