Advanced Fabric Settings

This chapter describes fabric-wide configuration options required in advanced use cases for deploying DMF policies.

Configuring Advanced Fabric Settings

Overview

Before the DMF 8.4 release, the fabric-wide settings, specifically the Features section, as shown below, were available on the home page after logging in.
Figure 1. DMF Legacy Page (pre 8.4)
Beginning with DMF 8.4, a newly designed Dashboard replaces the former home page. The Features section is now the new DMF Features page. To navigate to the DMF Features page, click the gear icon in the navigation bar.
Figure 2. Gear Icon

Page Layout

All fabric-wide configuration settings required in advanced use cases for deploying DMF policies appear in the new DMF Features Page.

Figure 3. DMF Features Page
Each card on the page corresponds to a feature set.
Figure 4. Feature Set Card Example
The UI displays the following:
  • Feature Title
  • A brief description
  • View / Hide detailed information link
  • Current Setting
  • Edit Link - Use the Edit configuration button (pencil icon) to change the value.

The fabric-wide options used with DMF policies include the following:

Table 1. Feature Set
Auto VLAN Mode               Auto VLAN Range
Auto VLAN Strip              CRC Check
Custom Priority              Device Deployment Mode
Inport Mask                  Match Mode
Policy Overlap Limit         Policy Overlap Limit Strict
PTP Timestamping             Retain User Policy VLAN
Tunneling                    VLAN Preservation

Managing VLAN Tags in the Monitoring Fabric

Analysis tools often use VLAN tags to identify the filter interface receiving traffic. How VLAN IDs are assigned to traffic depends on which auto-VLAN mode is enabled. The system automatically assigns the VLAN ID from a configurable range of VLAN IDs, from 1 to 4094 by default. Available auto-VLAN modes behave as follows:
  • push-per-policy (default): Automatically adds a unique VLAN ID to all traffic selected by a specific policy. This setting enables tag-based forwarding.
  • push-per-filter: Automatically adds a unique VLAN ID from the auto-VLAN range (1-4094 by default) to traffic received on each filter interface. A custom range can be specified using the auto-vlan-range command. A VLAN ID outside the auto-VLAN range can be assigned manually to a filter interface, as shown in the sketch after this list.
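
For reference, a minimal CLI sketch of selecting push-per-filter with a custom auto-VLAN range and then manually assigning an out-of-range VLAN ID to a filter interface. The switch and interface names are taken from the examples in this chapter, and the rewrite vlan keyword on the filter role is assumed based on the references to rewrite VLAN under filter-interfaces later in this chapter:

controller-1(config)# auto-vlan-mode push-per-filter
controller-1(config)# auto-vlan-range vlan-min 100 vlan-max 199
controller-1(config)# switch FILTER-SW1
controller-1(config-switch)# interface ethernet1
controller-1(config-switch-if)# role filter interface-name TAP-PORT-eth1 rewrite vlan 500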

The VLAN ID assigned to policies or filter interfaces remains unchanged after controller reboot or failover. However, it changes if the policy is removed and added back again. Also, when the VLAN range is changed, existing assignments are discarded, and new assignments are made.

In push-per-filter mode, the original VLAN tag is preserved; however, if the packet already has two VLAN tags, the outer tag is rewritten with the assigned VLAN ID.

The following table summarizes how VLAN tagging occurs with the different auto-VLAN modes:
Table 2. VLAN Tagging Across VLAN Modes
Traffic with VLAN tag type | push-per-policy Mode (applies to all supported switches) | push-per-filter Mode (applies to all supported switches)
Untagged | Pushes a single tag | Pushes a single tag
Single tag | Pushes an outer (second) tag | Pushes an outer (second) tag
Two tags | Pushes an outer (third) tag, except on T3-based switches, which rewrite the outer tag so that the outer customer VLAN is replaced by the DMF policy VLAN | Rewrites the outer tag, so the outer customer VLAN is replaced by the DMF filter VLAN
Note: Enabling push-per-policy also enables the auto-delivery-interface-vlan-strip feature if it is disabled. Enabling push-per-filter does not enable the global delivery strip option if it was previously disabled.
The following table summarizes how different auto-VLAN modes affect the applications and services supported.
Note: Matching on untagged packets cannot be applied to DMF policies when in push-per-policy mode.
Table 3. Auto-VLAN Mode Comparison
Auto-VLAN Mode | Supported Platform | TCAM Optimization in the Core | L2 GRE Tunnels Support | Q-in-Q Packets Preserve Both Original Tags | Supported DMF Service Node Services | Manual Tag to Filter Interface
Push-per-policy (default) | All | Yes | Yes | Yes | All | Policy tag overwrites manual
Push-per-filter | All | No | Yes | No | All | Configuration not allowed
Note: Tunneling is supported with full-match or offset-match modes but not with l3-l4-match mode.

Tag-based forwarding, which improves traffic forwarding and reduces TCAM utilization on the monitoring fabric switches, is enabled only when you choose the push-per-policy option.
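
For quick reference, a minimal sketch of enabling tag-based forwarding from config mode (the full procedure, including the confirmation prompt and the re-installation of existing policies, is covered in Configuring Auto VLAN Mode using the CLI):

controller-1(config)# auto-vlan-mode push-per-policy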

When the mode is push-per-filter, display the VLAN that is pushed or rewritten for each filter interface using the show interface-names command, as shown below:
controller-1> show interface-names
~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF          Switch      IF Name      Dir State Speed   VLAN Tag Analytics Ip address Connected Device
-|---------------|-----------|------------|---|-----|-------|--------|---------|----------|----------------|
1 TAP-PORT-eth1   FILTER-SW1  ethernet1    rx  up    10Gbps  5        True
2 TAP-PORT-eth10  FILTER-SW1  ethernet10   rx  up    10Gbps  10       True
3 TAP-PORT-eth12  FILTER-SW1  ethernet12   rx  up    10Gbps  11       True
4 TAP-PORT-eth14  FILTER-SW1  ethernet14   rx  up    10Gbps  12       True
5 TAP-PORT-eth16  FILTER-SW1  ethernet16   rx  up    10Gbps  13       True
6 TAP-PORT-eth18  FILTER-SW1  ethernet18   rx  up    10Gbps  14       True
7 TAP-PORT-eth20  FILTER-SW1  ethernet20   rx  up    10Gbps  16       True
8 TAP-PORT-eth22  FILTER-SW1  ethernet22   rx  up    10Gbps  17       True

Auto VLAN Mode

Analysis tools often use VLAN tags to identify the filter interface receiving traffic. How VLAN IDs are assigned to traffic depends on which auto-VLAN mode is enabled. The system automatically assigns the VLAN ID from a configurable range of VLAN IDs from 1 to 4094 by default. Available auto-VLAN modes behave as follows:

  • Push per Policy (default): Automatically adds a unique VLAN ID to all traffic selected by a specific policy. This setting enables tag-based forwarding.
  • Push per Filter: Automatically adds a unique VLAN ID from the auto-VLAN range (1-4094 by default) to traffic received on each filter interface. A custom VLAN range can be specified using the auto-vlan-range command. A VLAN ID outside the auto-VLAN range can be assigned manually to a filter interface.

The tables in the Managing VLAN Tags in the Monitoring Fabric section summarize how VLAN tagging occurs with each auto-VLAN mode and how the modes affect the supported applications and services.

Tag-based forwarding, which improves traffic forwarding and reduces TCAM utilization on the monitoring fabric switches, is only enabled when choosing the push-per-policy option.

Use the CLI or the GUI to configure Auto VLAN Mode as described in the following topics.

Configuring Auto VLAN Mode using the CLI

To set the auto VLAN mode, complete the following steps:

  1. When setting the auto VLAN mode to push-per-filter, define the range of automatically assigned VLAN IDs by entering the following command from config mode:
    auto-vlan-range vlan-min <start> vlan-max <end>
    Replace start and end with the first and last VLAN ID in the range. For example, the following command restricts automatic assignment to VLAN IDs 3994 through 4094:
    controller-1(config)# auto-vlan-range vlan-min 3994 vlan-max 4094
  2. Select the VLAN mode using the following command from config mode:
    auto-vlan-mode { push-per-filter | push-per-policy }

    Find details of the impact of these options in the Managing VLAN Tags in the Monitoring Fabric section.

    For example, the following command adds a unique outer VLAN tag to each packet received on each filter interface:
    controller-1(config)# auto-vlan-mode push-per-filter
    Switching to auto vlan mode would cause policies to be re-installed. Enter "yes" (or "y") to continue: y
  3. To display the configured VLAN mode, enter the show fabric command, as in the following example:
    controller-1# show fabric
    ~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~
    Number of switches: 5
    Inport masking: True
    Start time: 2018-11-02 23:42:29.183000 UTC
    Number of unmanaged services: 0
    Filter efficiency : 1:1
    Number of switches with service interfaces: 0
    Total delivery traffic (bps): 411Kbps
    Number of managed service instances : 0
    Number of service interfaces: 0
    Match mode: full-match
    Number of delivery interfaces : 13
    Max pre-service BW (bps): -
    Auto VLAN mode: push-per-filter
    Number of switches with delivery interfaces : 4
    Number of managed devices : 1
    Uptime: 2 days, 19 hours
    Total ingress traffic (bps) : 550Kbps
    Max overlap policies (0=disable): 10
    Auto Delivery Interface Strip VLAN: False
    Number of core interfaces : 219
    Max filter BW (bps) : 184Gbps
    Number of switches with filter interfaces : 5
    State : Enabled
    Max delivery BW (bps) : 53Gbps
    Total pre-service traffic (bps) : -
    Track hosts : True
    Number of filter interfaces : 23
    Number of active policies : 3
    Number of policies: 25
    ------------------------output truncated------------------------
  4. To display the VLAN IDs assigned to each policy, enter the show policy command, as in the following example:
    controller-1> show policy
    # Policy Name                 Action  Runtime Status Type       Priority Overlap Priority Push VLAN Filter BW Delivery BW Post Match Filter Traffic Delivery Traffic Services
    -|---------------------------|-------|--------------|----------|--------|----------------|---------|---------|-----------|-------------------------|----------------|----------------------------|
    1 GENERATE-NETFLOW-RECORDS    forward installed      Configured 100      0                4         100Gbps   10Gbps      -                         -                DMF-OOB-NETFLOWSERVICE
    2 P1                          forward inactive       Configured 100      0                1         -         1Gbps       -                         -
    3 P2                          forward inactive       Configured 100      0                3         -         10Gbps      -                         -
    4 TAP-WINDOWS10-NETWORK       forward inactive       Configured 100      0                2         21Gbps    1Gbps       -                         -
    5 TIMESTAMP-INCOMING-PACKETS  forward inactive       Configured 100      0                5         -         100Gbps     -                         -                DMF-OOB-TIMESTAMPINGSERVICE
    controller-1>
Note: The strip VLAN option, when enabled, removes the outer VLAN tag, including the VLAN ID applied by any rewrite VLAN option.

Configuring Auto VLAN Mode using the GUI

Auto VLAN Mode

  1. To configure this feature, locate the corresponding card and click the Edit (pencil) icon.
    Figure 5. Auto VLAN Mode Config
  2. A confirmation edit dialogue window appears, displaying the corresponding prompt message.
    Figure 6. Edit VLAN Mode
  3. To configure different modes, click the drop-down arrow to open the menu.
    Figure 7. Drop-down Example
  4. From the drop-down menu, select and click on the desired mode.
    Figure 8. Push Per Policy
  5. Alternatively, you can directly input the desired mode name in the input area.
    Figure 9. Push Per Policy
  6. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
    Figure 10. Submit Button
  7. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 11. Current Configuration Status

The following feature sets work in the same manner as the Auto VLAN Mode feature described above.

  • Device Deployment Mode
  • Match Mode

Auto VLAN Range

The range of automatically generated VLAN IDs applies only when Auto VLAN Mode is set to push-per-filter. When no range is specified, VLAN IDs are assigned from the range 1 - 4094.

Configuring Auto VLAN Range using the GUI

  1. To configure this feature, locate the corresponding card and click the Edit (pencil) icon.
    Figure 12. Edit Auto VLAN Range
  2. A configuration edit dialogue window pops up, displaying the corresponding prompt message. The Auto VLAN Range defaults to 1 - 4094.
    Figure 13. Edit Auto VLAN Range
  3. Click on the Custom button to configure the custom range.
    Figure 14. Custom Button
  4. Adjust range value (minimum value: 1, maximum value: 4094). There are three ways to adjust the value of a range:
    • Directly enter the desired value in the input area, with the left side representing the minimum value of the range and the right side representing the maximum value.
    • Adjust the value by dragging the slider using a mouse. The left knob represents the minimum value of the range, while the right knob represents the maximum value.
    • Use the up and down arrow buttons in the input area to adjust the value accordingly. Pressing the up arrow increments the value by 1, while pressing the down arrow decrements it by 1.
  5. Click the Submit button to confirm the configuration changes or the Cancel button to discard the changes.
  6. After successfully setting the configuration, the current configuration status displays next to the edit button.
    Figure 15. Configuration Change Success

Configuring Auto VLAN Range using the CLI

To set the Auto VLAN Range from config mode, enter the following command, replacing start and end with the first and last VLAN ID in the desired range:

auto-vlan-range vlan-min <start> vlan-max <end>

For example, the following command restricts automatic assignment to VLAN IDs 3994 through 4094:

controller-1(config)# auto-vlan-range vlan-min 3994 vlan-max 4094

Auto VLAN Strip

The strip VLAN option removes the outer VLAN tag before forwarding the packet to a delivery interface. Only the outer tag is removed if the packet has two VLAN tags. If it has no VLAN tag, the packet is not modified. Users can remove the VLAN ID on traffic forwarded to a specific delivery interface or globally for all delivery interfaces. The strip VLAN option removes any VLAN ID applied by the rewrite VLAN option.

The strip VLAN option removes the VLAN ID on traffic forwarded to the delivery interface. The following are the two methods available:

  • Remove VLAN IDs fabric-wide for all delivery interfaces. This method removes only the VLAN tag added by the DMF fabric.
  • Remove VLAN IDs on specific delivery interfaces. This method has four options:
    • Keep all tags intact. Preserves the VLAN tag added by the DMF fabric and other tags in the traffic using the strip-no-vlan option during delivery interface configuration.
    • Remove only the outer VLAN tag the DANZ Monitoring Fabric added using the strip-one-vlan option during delivery interface configuration.
    • Remove only the second (inner) tag. Preserves the outer VLAN tag added by the DMF fabric and removes the second (inner) tag in the traffic using the strip-second-vlan option during delivery interface configuration.
    • Remove two tags. Removes the outer VLAN tag added by the DMF fabric and the inner VLAN tag in the traffic using the strip-two-vlan option during delivery interface configuration.
Note: The strip vlan command for a specific delivery interface overrides the fabric-wide strip vlan option.

By default, the VLAN ID is stripped when DMF adds it as a result of enabling the following options:

  • Push per Policy
  • Push per Filter
  • Rewrite VLAN under filter-interfaces

Tagging and stripping VLANs as they ingress and egress DMF differs depending on whether the switch is Trident 3-based.

Use the CLI or the GUI to configure Auto VLAN Strip as described in the following topics.

Auto VLAN Strip using the CLI

The strip VLAN option removes the outer VLAN tag before forwarding the packet to a delivery interface. Only the outer tag is removed if the packet has two VLAN tags. If it has no VLAN tag, the packet is not modified. Users can remove the VLAN ID on traffic forwarded to a specific delivery interface or globally for all delivery interfaces. The strip VLAN option removes any VLAN ID applied by the rewrite VLAN option.

The following are the two methods available:

  • Remove VLAN IDs fabric-wide for all delivery interfaces: This method removes only the VLAN tag added by the DMF Fabric.
  • Remove VLAN IDs only on specific delivery interfaces: This method has four options:
    • Keep all tags intact. Preserves the VLAN tag added by DMF Fabric and other tags in the traffic using strip-no-vlan option during delivery interface configuration.
    • Remove only the outer VLAN tag the DANZ Monitoring Fabric added using the strip-one-vlan option during delivery interface configuration.
    • Remove only the second (inner) tag. Preserves the VLAN (outer) tag added by DMF and removes the second (inner) tag in the traffic using the strip-second-vlan option during delivery interface configuration.
    • Remove two tags. Removes the outer VLAN tag added by DMF fabric and the inner VLAN tag in the traffic using the strip-two-vlan option during delivery interface configuration.
Note: The strip vlan command for a specific delivery interface overrides the fabric-wide strip VLAN option.
By default, the VLAN ID is stripped when DMF adds it as a result of enabling the following options:
  • push-per-policy
  • push-per-filter
  • rewrite vlan under filter-interfaces

To view the current auto-delivery-interface-vlan-strip configuration, enter the following command:

controller-1> show running-config feature details
! deployment-mode
deployment-mode pre-configured
! auto-delivery-interface-vlan-strip
auto-delivery-interface-vlan-strip
! auto-vlan-mode
auto-vlan-mode push-per-policy
! auto-vlan-range
auto-vlan-range vlan-min 3200 vlan-max 4094
! crc
crc
! match-mode
match-mode full-match
! tunneling
tunneling
! allow-custom-priority
allow-custom-priority
! inport-mask
no inport-mask
! overlap-limit-strict
no overlap-limit-strict
! overlap-policy-limit
overlap-policy-limit 10
! packet-capture
packet-capture retention-days 7

To view the current auto-delivery-interface-vlan-strip state, enter the following command:

controller-1> show fabric
~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~
Number of switches : 5
Inport masking : True
Start time : 2018-10-16 22:30:03.345000 UTC
Number of unmanaged services : 0
Filter efficiency : 3005:1
Number of switches with service interfaces : 0
Total delivery traffic (bps) : 232bps
Number of managed service instances : 0
Number of service interfaces : 0
Match mode : l3-l4-match
Number of delivery interfaces : 24
Max pre-service BW (bps) : -
Auto VLAN mode : push-per-policy
Number of switches with delivery interfaces : 5
Number of managed devices : 1
Uptime : 21 hours, 53 minutes
Total ingress traffic (bps) : 697Kbps
Max overlap policies (0=disable) : 10
Auto Delivery Interface Strip VLAN : True

To disable this global setting, enter the following command from config mode:
controller-1(config)# no auto-delivery-interface-vlan-strip

The delivery interface level command to strip the VLAN overrides the global auto-delivery-interface-vlan-strip command. Use the following options when global VLAN stripping is disabled or when you want to override the default strip behavior on a specific delivery interface:

When you want to strip the VLAN added by DMF fabric on a specific delivery interface, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-one-vlan
When global VLAN stripping is enabled, it strips only the outer VLAN ID. To remove the outer VLAN ID added by DMF as well as the inner VLAN ID, enter the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-two-vlan
To strip only the inner VLAN ID and preserve the outer VLAN ID that DMF added, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-second-vlan
To preserve the VLAN tag added by DMF and other tags in the traffic, use the following command:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-no-vlan
Note: For all modes, VLAN ID stripping is supported at both the global and delivery interface levels. The rewrite-per-policy and rewrite-per-filter options were removed in DMF Release 6.0 because the push-per-policy and push-per-filter options now support the related use cases.
The syntax for the strip VLAN ID feature is as follows:
controller-1(config-switch-if)# role delivery interface-name <name> [strip-no-vlan | strip-one-vlan | strip-second-vlan | strip-two-vlan]

You can use the option to leave all VLAN tags intact, remove the outermost tag, remove the second (inner) tag, or remove the outermost two tags.

By default, VLAN stripping is enabled and the outer VLAN added by DMF is removed.

To preserve the outer VLAN tag, enter the strip-no-vlan command, as in the following example, which preserves the outer VLAN ID for traffic forwarded to the delivery interface TOOL-PORT-1:
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-no-vlan
When global VLAN stripping is disabled, the following commands remove the outer VLAN tag, added by DMF, on packets transmitted to the specific delivery interface ethernet20 on DMF-DELIVERY-SWITCH-1:
controller-1(config)# switch DMF-DELIVERY-SWITCH-1
controller-1(config-switch)# interface ethernet20
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-one-vlan
To restore the default configuration, which is to strip the VLAN IDs from traffic to every delivery interface, enter the following command:
controller-1(config)# auto-delivery-interface-vlan-strip
This would enable auto delivery interface strip VLAN feature. 
Existing policies will be re-computed. Enter “yes” (or “y”) to continue: yes

As mentioned earlier, tagging and stripping VLANs as they ingress and egress DMF differs based on whether the switch uses a Trident 3 chipset. The following scenarios show how DMF behaves in different VLAN modes with various knobs set.

Scenario 1

  • VLAN mode: Push per Policy
  • Filter interface on any switch except a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN stripping is enabled
Table 4. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type No Configuration strip-no-VLAN strip-one-VLAN strip-second-VLAN strip-two-VLAN
  DMF policy VLAN is stripped automatically on delivery interface using default global strip VLAN added by DMF DMF policy VLAN and customer VLAN preserved Strips the outermost VLAN that is DMF policy VLAN DMF policy VLAN is preserved and outermost customer VLAN is removed Strip two VLANs, DMF policy VLAN and customer outer VLAN removed
Untagged Packets exit DMF as untagged packets Packets exit DMF as singly tagged packets. VLAN in the packet is DMF policy VLAN. Packets exit DMF as untagged packets. Packets exit DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packets exit DMF as untagged traffic.
Singly Tagged Packets exit DMF as single-tagged traffic with customer VLAN. Packets exit DMF as doubly tagged packets. Outer VLAN in the packet is DMF policy VLAN. Packets exit DMF as single-tagged traffic with customer VLAN. Packets exit DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packets exit DMF as untagged traffic.
Doubly Tagged Packet exits DMF as doubly tagged traffic. Both VLANs are customer VLANs. Packet exits DMF as triple-tagged packets. Outermost VLAN in the packet is the DMF policy VLAN. Packet exits DMF as doubly tagged traffic. Both VLANs are customer VLANs. Packet exits DMF as double-tagged packets. Outer VLAN is DMF policy VLAN, inner VLAN is inner customer VLAN in the original packet. Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN.

Scenario 2

  • VLAN Mode: Push per Policy
  • Filter interface on any switch except a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN strip is disabled
Table 5. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type No Configuration strip-no-VLAN strip-one-VLAN strip-second-VLAN strip-two-VLAN
  DMF policy VLAN and customer VLAN are preserved DMF policy VLAN and customer VLAN are preserved Strips only the outermost VLAN that is DMF policy VLAN DMF policy VLAN is preserved and outermost customer VLAN is removed Strip two VLANs, DMF policy VLAN and customer outer VLAN removed
Untagged Packet exits DMF as singly tagged packets. VLAN in the packet is DMF policy VLAN. Packet exits DMF as singly tagged packets. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged packets. Packet exits DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged traffic.
Singly Tagged Packet exits DMF as doubly tagged packets. Outer VLAN in packet is DMF policy VLAN and inner VLAN is customer outer VLAN. Packet exits DMF as doubly tagged packets. Outer VLAN in the packet is DMF policy VLAN. Packet exits DMF as single-tagged traffic with customer VLAN. Packet exits DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged traffic.
Doubly Tagged Packet exits DMF as triple-tagged packets. Outermost VLAN in the packet is the DMF policy VLAN. Packet exits DMF as triple-tagged packets. Outermost VLAN in the packet is the DMF policy VLAN. Packet exits DMF as doubly tagged traffic. Both VLANs are customer VLANs. Packet exits DMF as doubly tagged packets. Outer VLAN is DMF policy VLAN, inner VLAN is inner customer VLAN in the original packet. Packet exits DMF as singly tagged traffic. VLAN in the packets is the inner customer VLAN.

Scenario 3

  • VLAN Mode - Push per Policy
  • Filter interface on a Trident 3 switch
  • Delivery interface on any switch
  • Global VLAN strip is enabled
Table 6. Behavior of traffic as it egresses with different strip options on a delivery interface
VLAN tag type No Configuration strip-no-VLAN strip-one-VLAN strip-second-VLAN strip-two-VLAN
  DMF policy VLAN is stripped automatically on delivery interface using default global strip VLAN added by DMF DMF policy VLAN and customer VLAN preserved Strips the outermost VLAN that is DMF policy VLAN DMF policy VLAN is preserved and outermost customer VLAN is removed Strip two VLANs, DMF policy VLAN and customer outer VLAN removed
Untagged Packet exits DMF as untagged packets. Packet exits DMF as singly tagged packets. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged packets. Packet exits DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged traffic.
Singly Tagged Packet exits DMF as single-tagged traffic with customer VLAN. Packet exits DMF as doubly tagged packets. Outer VLAN in the packet is DMF policy VLAN. Packet exits DMF as single tagged traffic with customer VLAN. Packet exits DMF as single-tagged traffic. VLAN in the packet is DMF policy VLAN. Packet exits DMF as untagged traffic.
Doubly Tagged Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN Packet exits DMF as doubly tagged traffic. Outer customer VLAN is replaced by DMF policy VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the DMF policy VLAN. Packet exits DMF as untagged traffic.

Scenario 4

  • VLAN Mode - Push per Filter
  • Filter interface on any switch
  • Delivery interface on any switch
  • Global VLAN strip is enabled
Table 7. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type No Configuration strip-no-VLAN strip-one-VLAN strip-second-VLAN strip-two-VLAN
  DMF filter VLAN is stripped automatically on delivery interface using global strip VLAN added by DMF. DMF filter VLAN and customer VLAN preserved. Strips the outermost VLAN that is DMF filter VLAN. DMF filter VLAN is preserved and outermost customer VLAN is removed. Strip two VLANs, DMF filter interface VLAN and customer outer VLAN removed.
Untagged Packet exits DMF as untagged packets. Packet exits DMF as singly tagged packets. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged packets. Packet exits DMF as single tagged traffic. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged traffic.
Singly Tagged Packet exits DMF as singly tagged traffic. VLAN in the packet is the customer VLAN. Packet exits DMF as doubly tagged packets. Outer VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the customer VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged traffic.
Doubly Tagged Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN. Packet exits DMF as doubly tagged traffic. Outer customer VLAN is replaced by DMF filter interface VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the DMF filter interface VLAN. Packet exits DMF as untagged traffic.

Scenario 5

  • VLAN Mode - Push per Filter
  • Filter interface on any switch
  • Delivery interface on any switch
  • Global VLAN strip is disabled
Table 8. Behavior of Traffic as It Egresses with Different Strip Options on a Delivery Interface
VLAN tag type No Configuration strip-no-VLAN strip-one-VLAN strip-second-VLAN strip-two-VLAN
  DMF filter VLAN and customer VLAN are preserved. DMF filter VLAN and customer VLAN preserved. Strips the outermost VLAN that is DMF filter VLAN. DMF filter VLAN is preserved and outermost customer VLAN is removed. Strip two VLANs, DMF filter interface VLAN and customer outer VLAN removed.
Untagged Packet exits DMF as singly tagged packets. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as singly tagged packets. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged packets. Packet exits DMF as singly tagged traffic. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged traffic.
Singly Tagged Packet exits DMF as doubly tagged traffic. Outer VLAN in the packet is DMF filter VLAN and inner VLAN is the customer VLAN. Packet exits DMF as doubly tagged packets. Outer VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as single tagged traffic. VLAN in the packet is the customer VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is DMF filter interface VLAN. Packet exits DMF as untagged traffic.
Doubly Tagged Packet exits DMF as doubly tagged traffic. Outer customer VLAN is replaced by DMF filter interface VLAN. Packet exits DMF as doubly tagged traffic. Outer customer VLAN is replaced by DMF filter interface VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the inner customer VLAN. Packet exits DMF as singly tagged traffic. VLAN in the packet is the DMF filter interface VLAN. Packet exits DMF as untagged traffic.

Auto VLAN Strip using the GUI

Auto VLAN Strip

  1. A toggle button controls the configuration of this feature. Locate the corresponding card and click the toggle switch.
    Figure 16. Toggle Switch
  2. A confirm window pops up, displaying the corresponding prompt message. Click the Enable button to confirm the configuration changes or the Cancel button to cancel the configuration. Conversely, to disable the configuration, click Disable.
    Figure 17. Confirm / Enable
  3. Review any warning messages that appear in the confirmation window during the configuration process.
    Figure 18. Warning Message - Changing

The following feature sets work in the same manner as the Auto VLAN Strip feature described above.

  • CRC Check
  • Custom Priority
  • Inport Mask
  • Policy Overlap Limit Strict
  • Retain User Policy VLAN
  • Tunneling

CRC Check

If the Switch CRC option is enabled, which is the default, each DMF switch drops incoming packets that enter the fabric with a CRC error. The switch generates a new CRC if the incoming packet was modified using an option that modifies the original CRC checksum, which includes the push VLAN, rewrite VLAN, strip VLAN, and L2 GRE tunnel options.

Note: Enable the Switch CRC option to use the DMF tunneling feature.

If the Switch CRC option is disabled, DMF switches do not check the CRC of incoming packets and do not drop packets with CRC errors. Also, switches do not generate a new CRC if the packet is modified. This mode is helpful if packets with CRC errors need to be delivered to a destination tool unmodified for analysis. When disabling the Switch CRC option, ensure the destination tool does not drop packets that have CRC errors. Also, be aware that modification of packets by DMF options will introduce CRC errors, so these errors are not mistaken for CRC errors from the traffic source.

Note: When the Switch CRC option is disabled, packets going to the Service Node or Recorder Node are dropped because a new CRC is not calculated when push-per-policy or push-per-filter adds a VLAN tag.

Enable and disable CRC Check using the steps described in the following topics.

CRC Check using the CLI

If the Switch CRC option is enabled, which is the default, each DMF switch drops incoming packets that enter the fabric with a CRC error. The switch generates a new CRC if the incoming packet was modified using an option that modifies the original CRC checksum, which includes the push VLAN, rewrite VLAN, strip VLAN, and L2 GRE tunnel options.
Note: Enable the Switch CRC option to use the DMF tunneling feature.

If the Switch CRC option is disabled, DMF switches do not check the CRC of incoming packets and do not drop packets with CRC errors. Also, switches do not generate a new CRC if the packet is modified. This mode is helpful if packets with CRC errors need to be delivered to a destination tool unmodified for analysis. If you disable the Switch CRC option, ensure the destination tool does not drop packets that have CRC errors. Also, be aware that modification of packets by DMF options will introduce CRC errors, so these errors are not mistaken for CRC errors from the traffic source.

To disable the Switch CRC option, enter the following command from config mode:
controller-1(config)# no crc
Disabling CRC mode may cause problems to tunnel interface. Enter “yes” (or “y”) to continue: y
In the event the Switch CRC option is disabled, re-enable the Switch CRC option using the following command from config mode:
controller-1(config)# crc
Enabling CRC mode would cause packets with crc error dropped. Enter "yes" (or "y") to continue: y
Tip: To enable or disable CRC Check through the GUI, refer to the CRC Check using the GUI section.
Note: When the Switch CRC option is disabled, packets going to the service node or recorder node are dropped because a new CRC is not calculated when push-per-policy or push-per-filter adds a VLAN tag.

CRC Check using the GUI

From the DMF Features page, proceed to the CRC Check feature card and perform the following steps to enable the feature.
  1. Select the CRC Check card.
    Figure 19. CRC Check Disabled
  2. Toggle the CRC Check to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 20. Enable CRC Check
  4. CRC Check is running.
    Figure 21. CRC Check Enabled
  5. To disable the feature, toggle the CRC Check to Off. Click Disable and confirm.
    Figure 22. Disable CRC Check
    The feature card updates with the status.
    Figure 23. CRC Check Disabled

Custom Priority

When custom priorities are allowed, non-admin users may assign their policies a priority between 0 and 100 (the default value is 100). When custom priorities are not allowed, the default priority of 100 is automatically assigned to non-admin users' policies.
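
A hypothetical sketch only: allow-custom-priority is the documented command shown in the CLI topic below, while the policy submode, the policy name, and the priority keyword are illustrative assumptions that are not taken from this chapter.

controller-1(config)# allow-custom-priority
! hypothetical: policy submode, name, and priority value shown for illustration only
controller-1(config)# policy NON-ADMIN-POLICY
controller-1(config-policy)# priority 50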

Enable and disable Custom Priority using the steps described in the following topics.

Configuring Custom Priority using the GUI

From the DMF Features page, proceed to the Custom Priority feature card and perform the following steps to enable the feature.
  1. Select the Custom Priority card.
    Figure 24. Custom Priority Disabled
  2. Toggle the Custom Priority to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 25. Enable Custom Priority
  4. Custom Priority is running.
    Figure 26. Custom Priority Enabled
  5. To disable the feature, toggle the Custom Priority to Off. Click Disable and confirm.
    Figure 27. Disable Custom Priority
    The feature card updates with the status.
    Figure 28. Custom Priority Disabled

Configuring Custom Priority using the CLI

To enable Custom Priority, enter the following command:

controller-1(config)# allow-custom-priority

To disable Custom Priority, enter the following command:

controller-1(config)# no allow-custom-priority

Device Deployment Mode

Complete the fabric switch installation in one of the following two modes:

  • Layer 2 Zero Touch Fabric (L2ZTF, Auto-discovery switch provisioning mode)

    In this mode, which is the default, the switch ONIE software automatically discovers the Controller via IPv6 link-local addresses and downloads and installs the appropriate Switch Light OS image from the Controller. This installation method requires all the fabric switches and the DMF Controller to be in the same Layer 2 network (IP subnet). If the fabric switches need IPv4 addresses to communicate with SNMP or other external services, configure IPAM, which provides the Controller with a range of IPv4 addresses to allocate to the fabric switches.

  • Layer 3 Zero Touch Fabric (L3ZTF, Preconfigured switch provisioning mode)

    When fabric switches are in a different Layer 2 network from the Controller, log in to each switch individually to configure network information and download the ZTF installer. Subsequently, the switch automatically downloads Switch Light OS from the Controller. This mode requires communication between the Controller and the fabric switches using IPv4 addresses, and no IPAM configuration is required.

The following table summarizes the requirements for installation using each mode:
Requirement | Layer 2 mode | Layer 3 mode
Any switch in a different subnet from the controller | No | Yes
IPAM configuration for SNMP and other IPv4 services | Yes | No
IP address assignment | IPv4 or IPv6 | IPv4
Refer to this section (in User Guide) | Using L2 ZTF (Auto-Discovery) Provisioning Mode | Changing to Layer 3 (Pre-Configured) Switch Provisioning Mode

All the fabric switches in a single fabric must be installed using the same mode. If any fabric switches are in a different IP subnet than the Controller, use Layer 3 mode for installing all the switches, even those in the same Layer 2 network as the Controller. Installing switches in mixed mode, with some switches using ZTF in the same Layer 2 network as the Controller while other switches in a different subnet are installed manually or using DHCP, is unsupported.

Configuring Device Deployment Mode using the GUI

From the DMF Features page, proceed to the Device Deployment Mode feature card and perform the following steps to manage the feature.

  1. Select the Device Deployment Mode card.
    Figure 29. Device Deployment Mode - Auto Discovery
  2. Enter the edit mode using the pencil icon.
    Figure 30. Configure Device Deployment Mode
  3. Change the switching mode as required using the drop-down menu. The default mode is Auto Discovery.
    Figure 31. Device Deployment Mode Options
    Figure 32. Device Deployment Mode - Pre-Configured Option
  4. Click Submit and confirm the operation when prompted.
  5. The Device Deployment Mode status updates.
    Figure 33. Device Deployment Mode - Status Update

Configuring Device Deployment Mode using the CLI

Device Deployment Mode has two options: auto-discovery and pre-configured. Select the desired option as shown below:
controller-1(config)# deployment-mode auto-discovery

Changing device deployment mode requires modifying switch configuration. Enter "yes" (or "y") to continue: y
controller-1(config)# deployment-mode pre-configured

Changing device deployment mode requires modifying switch configuration. Enter "yes" (or "y") to continue: y

Inport Mask

Enable and disable Inport Mask using the steps described in the following topics.

InPort Mask using the CLI

DANZ Monitoring Fabric implements multiple flow optimizations to reduce the number of flows programmed in the DMF switch TCAM space. This feature enables effective usage of TCAM space, and it is on by default.

When this feature is off, TCAM rules are applied for each ingress port belonging to the same policy. For example, in the following topology, if a policy was configured with 10 match rules and filter-interface as F1 and F2, then 20 (10 for F1 and 10 for F2) TCAM rows were consumed.
Figure 34. Simple Inport Mask Optimization

With inport mask optimization, only 10 rules are consumed. This feature optimizes TCAM usage at every level (filter, core, delivery) in the DMF network.

Consider the more complex topology illustrated below:
Figure 35. Complex Inport Mask Optimization

In this topology, if a policy has N rules without in-port optimization, the policy will consume 3N TCAM entries at Switch 1, 3N at Switch 2, and 2N at Switch 3. With the in-port optimization feature enabled, the policy consumes only N rules at each switch.
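
For example, with N = 10 match rules, the policy consumes 30 + 30 + 20 = 80 TCAM entries across the three switches without inport mask optimization, compared to 10 + 10 + 10 = 30 entries with the feature enabled.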

However, this feature loses granularity in the statistics available because there is only one set of flow mods for multiple filter ports per switch. Statistics without this feature are maintained per filter port per policy.

With inport optimization enabled, the statistics are combined for all input ports sharing rules on that switch. You can obtain filter port statistics for different flow mods for each filter port. However, this requires disabling inport optimization, which is enabled by default.

To disable the inport optimization feature, enter the following command from config mode:
controller-1(config)# no inport-mask

Inport Mask using the GUI

From the DMF Features page, proceed to the Inport Mask feature card and perform the following steps to enable the feature.
  1. Select the Inport Mask card.
    Figure 36. Inport Mask Disabled
  2. Toggle the Inport Mask to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 37. Enable Inport Mask
  4. Inport Mask is running.
    Figure 38. Inport Mask Enabled
  5. To disable the feature, toggle the Inport Mask to Off. Click Disable and confirm.
    Figure 39. Disable Inport Mask
    The feature card updates with the status.
    Figure 40. Inport Mask Disabled

Match Mode

Switches have finite hardware resources available for packet matching on aggregated traffic streams. This resource allocation is relatively static and configured in advance. The DANZ Monitoring Fabric supports three allocation schemes, referred to as switching (match) modes:
  • L3-L4 mode (default mode): With L3-L4 mode, fields other than src-mac and dst-mac can be used for specifying policies. If no policies use src-mac or dst-mac, the L3-L4 mode allows more match rules per switch.
  • Full-match mode: With full-match mode, all matching fields, including src-mac and dst-mac, can be used while specifying policies.
  • L3-L4 Offset mode: L3-L4 offset mode allows matching beyond the L4 header up to 128 bytes from the beginning of the packet. The number of matches per switch in this mode is the same as in full-match mode. As with L3-L4 mode, matches using src-mac and dst-mac are not permitted.
    Note: Changing switching modes causes all fabric switches to disconnect and reconnect with the Controller. Also, all existing policies will be reinstalled. The switching mode applies to all DMF switches in the DANZ Monitoring Fabric. Switching between modes is possible, but any match rules incompatible with the new mode will fail.

Setting the Match Mode Using the CLI

To use the CLI to set the match mode, enter the following command:
controller-1(config)# match-mode {full-match | l3-l4-match | l3-l4-offset-match}
For example, the following command sets the match mode to full-match mode:
controller-1(config)# match-mode full-match

Setting the Match Mode Using the GUI

From the DMF Features page, proceed to the Match Mode feature card and perform the following steps to enable the feature.

  1. Select the Match Mode card.
    Figure 41. L3-L4 Match Mode
  2. Enter the edit mode using the pencil icon.
    Figure 42. Configure Switching Mode
  3. Change the switching mode as required using the drop-down menu. The default mode is L3-L4 Match.
    Figure 43. L3-L4 Match Options
  4. Click Submit and confirm the operation when prompted.
Note: An error message is displayed if the existing configuration of the monitoring fabric is incompatible with the specified switching mode.

Retain User Policy VLAN

Enable and disable Retain User Policy VLAN using the steps described in the following topics.

Retain User Policy VLAN using the CLI

This feature sends traffic that matches a dynamic overlap policy to the delivery interfaces with the user policy VLAN tag instead of the dynamic overlap policy VLAN tag. It is supported only in push-per-policy mode.

For example, consider policy P1 with filter interface F1 and delivery interface D1, and policy P2 with filter interface F1 and delivery interface D2. When the overlap condition is met, the dynamic overlap policy P1_o_P2 is created with filter interface F1 and delivery interfaces D1 and D2. Policy P1 is assigned a VLAN (VLAN 10) and policy P2 a VLAN (VLAN 20) when they are created, and the overlap policy is assigned its own VLAN (VLAN 30) when it is dynamically created. With this feature enabled, traffic forwarded to D1 carries the P1 policy VLAN tag (VLAN 10) and traffic forwarded to D2 carries the P2 policy VLAN tag (VLAN 20). With this feature disabled, traffic forwarded to D1 and D2 carries the dynamic overlap policy VLAN tag (VLAN 30). By default, this feature is disabled.

Feature Limitations:

  • An overlap dynamic policy will fail when the overlap policy has filter (F1) and delivery interface (D1) on the same switch (switch A) and another delivery interface (D2) on another switch (switch B).
  • Post-to-delivery dynamic policy will fail when it has a filter interface (F1) and a delivery interface (D1) on the same switch (switch A) and another delivery interface (D2) on another switch (switch B).
  • Overlap policies may be reinstalled when a fabric port goes up or down when this feature is enabled.
  • Double-tagged VLAN traffic is not supported and will be dropped at the delivery interface.
  • Tunnel interfaces are not supported with this feature.
  • Only IPv4 traffic is supported; other non-IPv4 traffic will be dropped at the delivery interface.
  • Delivery interfaces with IP addresses (L3 delivery interfaces) are not supported.
  • This feature is not supported on EOS switches (Arista 7280 switches).
  • Delivery interface statistics may not be accurate when displayed using the sh policy command. This will happen when policy P1 has F1, D1, D2 and policy P2 has F1, D2. In this case, overlap policy P1_o_P2 will be created with delivery interfaces D1, D2. Since D2 is in both policies P1 and P2, overlap traffic will be forwarded to D2 with both the P1 policy VLAN and the P2 policy VLAN. The sh policy <policy_name> command will not show this doubling of traffic on delivery interface D2. Delivery interface statistics will show this extra traffic forwarded from the delivery interface.
To enable this feature, enter the following command:
controller-1(config)# retain-user-policy-vlan
This will enable retain-user-policy-vlan feature. Non-IP packets will be dropped at delivery. Enter "yes" (or "y") to continue: yes
To see the current Retain Policy VLAN configuration, enter the following command:
controller-1> show fabric
~~~~~~~~~~~~~~~~~~~~~~~~~ Aggregate Network State ~~~~~~~~~~~~~~~~~~~~~~~~~
Number of switches : 14
Inport masking : True
Number of unmanaged services : 0
Number of switches with service interfaces : 0
Match mode : l3-l4-offset-match
Number of switches with delivery interfaces : 11
Filter efficiency : 1:1
Uptime : 4 days, 8 hours
Max overlap policies (0=disable) : 10
Auto Delivery Interface Strip VLAN : True
Number of core interfaces : 134
State : Enabled
Max delivery BW (bps) : 2.18Tbps
Health : unhealthy
Track hosts : True
Number of filter interfaces : 70
Number of policies : 101
Start time : 2022-02-28 16:18:01.807000 UTC
Number of delivery interfaces : 104
Retain User Policy Vlan : True

This feature can be used with the strip-second-vlan option during delivery interface configuration to preserve the outer DMF policy VLAN and strip the inner VLAN of traffic forwarded to a tool, or with the strip-no-vlan option to preserve all tags.
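
A minimal sketch combining the two options, using the example switch and delivery interface names from the Auto VLAN Strip section of this chapter (the confirmation prompt shown above is omitted here):

controller-1(config)# retain-user-policy-vlan
controller-1(config)# switch DMF-DELIVERY-SWITCH-1
controller-1(config-switch)# interface ethernet20
controller-1(config-switch-if)# role delivery interface-name TOOL-PORT-1 strip-second-vlan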

Retain User Policy VLAN using the GUI

From the DMF Features page, proceed to the Retain User Policy VLAN feature card and perform the following steps to enable the feature.
  1. Select the Retain User Policy VLAN card.
    Figure 44. Retain User Policy VLAN Disabled
  2. Toggle the Retain User Policy VLAN to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 45. Enable Retain User Policy VLAN
  4. Retain User Policy VLAN is running.
    Figure 46. Retain User Policy VLAN Enabled
  5. To disable the feature, toggle the Retain User Policy VLAN to Off. Click Disable and confirm.
    Figure 47. Disable Retain User Policy VLAN
    The feature card updates with the status.
    Figure 48. Retain User Policy VLAN Disabled

Tunneling

For more information about Tunneling, refer to the Understanding Tunneling section.

Enable and disable Tunneling using the steps described in the following topics.

Configuring Tunneling using the GUI

From the DMF Features page, proceed to the Tunneling feature card and perform the following steps to enable the feature.
  1. Select the Tunneling card.
    Figure 49. Tunneling Disabled
  2. Toggle Tunneling to On.
  3. Confirm the activation by clicking Enable or Cancel to return to the DMF Features page.
    Figure 50. Enable Tunneling
    Note: CRC Check must be running before attempting to enable Tunneling. An error message displays if CRC Check is not enabled. Proceeding to click Enable results in a validation error message. Refer to the CRC Check section for more information on configuring the CRC Check feature.
    Figure 51. CRC Check Warning Message
  4. Tunneling VLAN is running.
    Figure 52. Tunneling Enabled
  5. To disable the feature, toggle Tunneling to Off. Click Disable and confirm.
    Figure 53. Disable Tunneling
    The feature card updates with the status.
    Figure 54. Tunneling VLAN Disabled

Configuring Tunneling using the CLI

To enable Tunneling, enter the following command:
controller-1(config)# tunneling 

Tunneling is an Arista Licensed feature. 
Please ensure that you have purchased the license for tunneling before using this feature. 
Enter "yes" (or "y") to continue: y
controller-1(config)#
To disable Tunneling, enter the following command:
controller-1(config)# no tunneling 
This would disable tunneling feature? Enter "yes" (or "y") to continue: y
controller-1(config)#

VLAN Preservation

In DANZ Monitoring Fabric (DMF), metadata is appended to the packets forwarded by the fabric to a tool attached to a delivery interface. This metadata is encoded primarily in the outer VLAN tag of the packets.

By default (using the auto-delivery-strip feature), this outer VLAN tag is always removed on egress upon delivery to a tool.

The VLAN preservation feature adds the option to selectively preserve a packet's outer VLAN tag, instead of stripping all outer tags or preserving all of them.

VLAN preservation works in both push-per-filter and push-per-policy mode for auto-assigned and user-configured VLANs.

Note: VLAN preservation applies to switches running SWL OS and does not apply to switches running EOS.

This functionality supports only 2000 VLAN ID and port combinations per switch.

VLAN preservation is supported only on select Broadcom® switch ASICs. Ensure your switch model supports this feature before attempting to configure it.

Using the CLI to Configure VLAN Preservation

Configure VLAN preservation at two levels: global and local. A local configuration can override the global configuration.

Global Configuration

Enable VLAN preservation globally using the vlan-preservation command from config mode to apply a global configuration.

(config)# vlan-preservation
Two options exist while in the config-vlan-preservation submode:
  • preserve-user-configured-vlans
  • preserve-vlan

Use the help function to list the options by entering a ? (question mark).

(config-vlan-preservation)# ?
Commands:
preserve-user-configured-vlans  Preserve all user-configured VLANs for all delivery interfaces
preserve-vlan                   Configure VLAN ID to preserve for all delivery interfaces

Use the preserve-user-configured-vlans option to preserve all user-configured VLANs. The packets with the user-configured VLANs will have their fabric-applied VLAN tags preserved even after leaving the respective delivery interface.

(config-vlan-preservation)# preserve-user-configured-vlans

Use the preserve-vlan option to specify and preserve a particular VLAN ID. Any VLAN ID may be provided. In the following example, the packets with VLAN ID 100 or 200 will have their fabric-applied VLAN tags preserved upon delivery to the tool.

(config-vlan-preservation)# preserve-vlan 100
(config-vlan-preservation)# preserve-vlan 200

Local Configuration

This feature applies to delivery and both-filter-and-delivery interface roles.

Fabric-applied VLAN tag preservation can be enabled locally on each delivery interface as an alternative to the global VLAN preservation configuration. To enable this functionality locally, enter the following configuration submode using the if-vlan-preservation command to specify either one of the two available options. Use the help function to list the options by entering a ? (question mark).

(config-switch-if)# if-vlan-preservation
(config-switch-if-vlan-preservation)# ?
Commands:
preserve-user-configured-vlans  Preserve all user-configured VLANs for all delivery interfaces
preserve-vlan                   Configure VLAN ID to preserve for all delivery interfaces

Use the preserve-user-configured-vlans option to preserve all user-configured VLAN IDs in push-per-policy or push-per-filter mode on a selected delivery interface. All packets egressing such delivery interface will have their user-configured fabric VLAN tags preserved.

(config-switch-if-vlan-preservation)# preserve-user-configured-vlans

Use the preserve-vlan option to specify and preserve a particular VLAN ID. For example, if any packets with VLAN ID 100 or 300 egress the selected delivery interface, VLAN IDs 100 and 300 will be preserved.

(config-switch-if-vlan-preservation)# preserve-vlan 100
(config-switch-if-vlan-preservation)# preserve-vlan 300
Note: Any local vlan-preservation configuration overrides the global configuration for the selected interfaces by default.
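
Taken together, the global and local examples above can be combined. In the following sketch, VLAN 100 is preserved globally for all delivery interfaces, while one delivery interface locally preserves VLAN 300; per the note above, the local configuration takes precedence on that interface:

(config)# vlan-preservation
(config-vlan-preservation)# preserve-vlan 100

(config-switch-if)# if-vlan-preservation
(config-switch-if-vlan-preservation)# preserve-vlan 300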

On an MLAG delivery interface, the local configuration follows the same model, as shown below.

(config-mlag-domain-if)#
if-vlan-preservation  member  role
(config-mlag-domain-if)# if-vlan-preservation
(config-mlag-domain-if-vlan-preservation)#
preserve-user-configured-vlans  preserve-vlan

To disable selective VLAN preservation for a particular delivery or both-filter-and-delivery interface, use the following command to disable the feature's global and local configuration for the selected interface:

(config-switch-if)# role delivery interface-name del 
<cr>          no-analytics            strip-no-vlan     strip-second-vlan
ip-address    no-vlan-preservation    strip-one-vlan    strip-two-vlan
(config-switch-if)# role delivery interface-name del no-vlan-preservation

CLI Show Commands

The following show command displays the devices on which VLAN preservation is enabled and which VLANs are preserved on specific ports. Use this data primarily for debugging purposes.

# show switch all table vlan-preserve 
# Vlan-preserve Device name Entry key
-|-------------|-----------|----------------------|
1 0 delivery1 VlanVid(0x64), Port(6)
2 0 filter1 VlanVid(0x64), Port(6)
3 0 core1 VlanVid(0x64), Port(6)

Using the GUI to Configure VLAN Preservation

VLAN preservation can be configured at global and local levels. A local configuration can override the global configuration. Follow the steps outlined below to apply a Global Configuration, a Local Configuration, or an MLAG Delivery Interface configuration within an MLAG domain.

Global Configuration

  1. To view or edit the global configuration, navigate to the DANZ Monitoring Fabric (DMF) Features page by clicking the gear icon in the top right of the navigation bar.
    Figure 55. DMF Menu Bar
    The DMF Features page manages fabric-wide settings for DMF.
  2. Scroll to the VLAN Preservation card.
    Figure 56. DMF Features Page
    Figure 57. VLAN Preservation Card
  3. Click the Edit button (pencil icon) to configure or modify the global VLAN Preservation feature settings.
    Figure 58. Edit VLAN Preservation Configuration
    The edit screen has two input sections:
    • Toggle on or off the Preserve User Configured VLANs.
    • Enter the parameters for VLAN Preserve using the following functions:
      • Use the Add VLAN button to add VLAN IDs.
      • Select the Single VLAN type drop-down to add a single VLAN ID.
      • Select the Range VLAN type drop-down to add a continuous VLAN ID range.
      • Use the Trash button (delete) to delete a single VLAN ID or a VLAN ID range.
  4. Click the Submit button to save the configuration.
     

Local Configuration

  1. The VLAN Preservation configuration can be applied per delivery interface while configuring or editing a delivery or filter-and-delivery interface on the Monitoring Interfaces page or the Monitoring Interfaces > Delivery Interfaces page.
    Figure 59. Monitoring Interfaces Delivery Interface Create Interface
  2. The following inputs are available for the local feature configuration:
    • Toggle the Preserve User Configured VLANs button to on. Use this option to preserve all user-configured VLAN IDs in push-per-policy or push-per-filter mode on a selected delivery interface. The packets with the user-configured VLANs will have their fabric-applied VLAN tags preserved even after leaving the respective delivery interface.
    • VLAN Preserve. Use the + and - icon buttons to add and remove VLAN IDs.
    • Toggle the VLAN Preservation button to on. Disabling this option causes the global and local VLAN preservation configuration to be ignored for this delivery interface. VLAN Preservation is enabled by default.
  3. Click the Save button to save the configuration.
     

VLAN Preservation for MLAG Delivery Interfaces

  1. Configure VLAN preservation for MLAG delivery interfaces on the Fabric > MLAGs page. While configuring an MLAG domain, toggle the VLAN Preservation and Preserve User Configured VLANs switches to on, as required.
    Figure 60. Create MLAG Domain
    Figure 61. MLAG VLAN Preservation & Preserve User Configured VLANs (expanded view)

Troubleshooting

If a tool attached to a delivery interface expects packets with a preserved VLAN tag but receives untagged packets instead, use the following commands to troubleshoot the issue.
  1. A partial policy installation may occur if any delivery interface fails to preserve the VLAN tag. This can happen when exceeding the 2000 VLAN ID/Port combination limit. Use the show policy policy-name command to obtain a detailed status, as shown in the following example:
    (config)# show policy vlan-999
    Policy Name: vlan-999
    Config Status: active - forward
    Runtime Status : installed but partial failure
    Detailed Status: installed but partial failure - 
     Failed to preserve VLAN's on some/all 
     delivery interfaces, see warnings for details
    Priority : 100
    Overlap Priority : 0
    # of switches with filter interfaces : 1
    # of switches with delivery interfaces : 1
    # of switches with service interfaces: 0
    # of filter interfaces : 1
    # of delivery interfaces : 1
    # of core interfaces : 2
    # of services: 0
    # of pre service interfaces: 0
    # of post service interfaces : 0
    Push VLAN: 999
    Post Match Filter Traffic: -
    Total Delivery Rate: -
    Total Pre Service Rate : -
    Total Post Service Rate: -
    Overlapping Policies : none
    Component Policies : none
    Installed Time : 2023-11-06 21:01:11 UTC
    Installed Duration : 1 week
  2. Verify the running config and review if the VLAN preservation configuration is enabled for that VLAN ID and on that delivery interface.
    (config-vlan-preservation)# show running-config | grep "preserve"
    ! vlan-preservation
    vlan-preservation
    preserve-vlan 100
  3. Verify the show switch switch-name table vlan-preserve output. It displays the ports and VLAN ID combinations that are enabled.
    (config-policy)# show switch core1 table vlan-preserve
    # Vlan-preserve Device name Entry key
    -|-------------|-----------|----------------------|
    1 0 core1 VlanVid(0x64), Port(6)
  4. The same configuration can be verified from a switch (e.g., core1) by using the command below:
    root@core1:~# ofad-ctl gt vlan_preserve
    VLAN PRESERVE TABLE:
    --------------------
    VLAN: 100  Port: 6  PortClass: 6
  5. Verify if a switch has any associated preserve VLAN warnings among the fabric warnings:
    (config-vlan-preservation)# show fabric warnings | grep "preserve"
    1 delivery1 (00:00:52:54:00:85:ca:51) Switch 00:00:52:54:00:85:ca:51 
    cannot preserve VLANs for some interfaces due to resource exhaustion.
  6. The show fabric warnings feature-unsupported-on-device command provides information on whether VLAN preservation is configured on any unsupported devices:
    (config-switch)# show fabric warnings feature-unsupported-on-device 
    # Name Warning
    -|----|------------------------------------------------------------|
    1 del1 VLAN preservation feature is not supported on EOS switch eos
If you find any preserve VLAN fabric warnings, contact Arista Networks support for assistance.

Reuse of Policy VLANs

Starting with DMF Release 8.2, policies in different switch islands can reuse the same VLANs. A switch island is an isolated fabric managed by a single pair of Controllers; there is no data plane connection between fabrics in different switch islands. For example, with a single Controller pair managing six switches (switch1, switch2, switch3, switch4, switch5, and switch6), you can create two fabrics of three switches each (switch1, switch2, and switch3 in one switch island; switch4, switch5, and switch6 in another), as long as there is no data plane connection between switches in the different switch islands.

There is no command needed to enable this feature. If the above condition is met, creating policies in each switch island with the same policy VLAN tag is supported.

When this condition is met, assign the same policy VLAN to two policies in different switch islands using the push-vlan <vlan-tag> command under policy configuration. For example, policy P1 in switch island 1 and policy P2 in switch island 2 can both be assigned VLAN tag 10 with push-vlan 10, as shown in the sketch below.
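A minimal sketch of this configuration, assuming policies named P1 and P2 whose filter and delivery interfaces (configured as usual under each policy) reside entirely within their own switch islands:

controller-1(config)# policy P1
controller-1(config-policy)# push-vlan 10

controller-1(config)# policy P2
controller-1(config-policy)# push-vlan 10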

When a data plane link connects two switch islands, they merge into a single switch island. In that case, two policies cannot use the same policy VLAN tag, so one of the policies (P1 or P2) becomes inactive.

Rewriting the VLAN ID for a Filter Interface

When multiple filter interfaces share a destination tool, use the VLAN identifier assigned by the rewrite VLAN option to identify the filter interface on which specific packets were received. To use the rewrite VLAN option, assign a unique VLAN identifier to each filter interface. This VLAN ID should be outside the auto-VLAN range.
Note: In push-per-policy mode, the rewrite VLAN feature cannot be enabled on filter interfaces; attempting to do so results in a validation error. This feature is available only in push-per-filter mode.
The following commands change the VLAN tag on packets received on the interface ethernet10 on f-switch1 to 100. The role command in this example also assigns the alias TAP-PORT-1 to Ethernet interface 10.
controller-1(config)# switch f-switch1
controller-1(config-switch)# interface ethernet10
controller-1(config-switch-if)# role filter interface-name TAP-PORT-1 rewrite vlan 100
The rewrite VLAN option overwrites the original VLAN tag if the frame was already tagged, and this modification invalidates the frame's CRC checksum. The switch CRC option, enabled by default, recalculates the CRC after the frame has been modified so that a CRC error does not occur.
Note: Starting with DMF Release 7.1.0, simultaneously rewriting the VLAN ID and MAC address is supported and uses VLAN rewriting to isolate traffic while using MAC rewriting to forward traffic to specific VMs.

Reusing Filter Interface VLAN IDs

A DMF fabric comprises groups of switches, known as islands, connected over the data plane. There are no data plane connections between switches in different islands. When Push-Per-Filter forwarding is enabled, monitored traffic is forwarded within an island using the VLAN ID associated with a Filter Interface. These VLAN IDs are configurable. Previously, the only recommended configuration was for these VLAN IDs to be globally unique.

This feature adds official support for associating the same VLAN ID with multiple Filter Interfaces as long as they are in different islands. It provides more flexibility when duplicating Filter Interface configurations across islands and helps avoid exhausting the available VLAN IDs.

Note that within each island, VLAN IDs must still be unique, which means that Filter Interfaces in the same group of switches cannot have the same ID. If you try to reuse the same VLAN ID within an island, a fabric error is generated, and only the first Filter Interface (sorted alphanumerically by DMF name) remains in use.

Configuration

This feature requires no special configuration beyond the existing Filter Interface configuration workflow.

Troubleshooting

A fabric error occurs if the same VLAN ID is configured more than once in the same island. The error message includes the Filter Interface name, the switch name, and the VLAN ID that is not unique. When encountering this error, choose a different, non-conflicting VLAN ID for one of the conflicting Filter Interfaces.

Filter Interface invalid VLAN errors can be displayed in the CLI using the following command. The output below is arranged vertically for illustrative purposes only; the actual CLI output is formatted as a table.

> show fabric errors filter-interface-invalid-vlan
~~ Invalid Filter Interface VLAN(s) ~~
# 1
DMF Name     filter1-f1
IF Name      ethernet2
Switch       filter1 (00:00:52:54:00:4b:c9:bc)
Rewrite VLAN 1
Details      The configured rewrite VLAN 1 for filter interface filter1-f1
             is not unique within its fabric.
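For example, to resolve the conflict reported above, assign a non-conflicting VLAN to the affected filter interface. The VLAN ID below is illustrative; choose one that is unused within the island and outside the auto-VLAN range.

controller-1(config)# switch filter1
controller-1(config-switch)# interface ethernet2
controller-1(config-switch-if)# role filter interface-name filter1-f1 rewrite vlan 2001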
It is helpful to know all of the switches in an island. The following command lists all of the islands (referred to in this command as switch clusters) and their switch members:
>show debug switch-cluster 
# Member 
-|--------------|
1 core1, filter1
It can also be helpful to know how the switches within an island are interconnected. Use the following command to display all the links between the switches:
>show link all
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Links ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Active State Src switch Src IF Name Dst switch Dst IF Name Link Type Since 
-|------------|----------|-----------|----------|-----------|---------|-----------------------|
1 active       filter1    ethernet1   core1      ethernet1   normal    2023-05-24 22:31:39 UTC
2 active       core1      ethernet1   filter1    ethernet1   normal    2023-05-24 22:31:40 UTC

Considerations

  • VLAN IDs must be unique within an island. Filter Interfaces in the same island with the same VLAN ID are not supported.
  • This feature only applies to manually configured Filter Interface VLAN IDs. VLAN IDs that are automatically assigned are still unique across the entire fabric.

Using Push-per-filter Mode

The push-per-filter mode setting does not enable tag-based forwarding. Each filter interface is automatically assigned a VLAN ID; the default range is 1 to 4094. To change the range, use the auto-vlan-range command.

You can manually assign a VLAN not included in the defined range to a filter interface.

To manually assign a VLAN to a filter interface in push-per-filter mode, complete the following steps:

  1. Change the auto-vlan-range from the default (1-4094) to a limited range, as in the following example:
    controller-1(config)# auto-vlan-range vlan-min 1 vlan-max 1000

    The example above configures the auto-VLAN feature to use VLAN IDs from 1 to 1000.

  2. Assign a VLAN ID to the filter interface that is not in the range assigned to the auto-VLAN feature.
    controller-1(config-switch-if)# role filter interface-name TAP-1 rewrite vlan 1001

Tag-based Forwarding

The DANZ Monitoring Fabric (DMF) Controller configures each switch with forwarding paths based on the most efficient links between the incoming filter interface and the delivery interface, which is connected to analysis tools. The TCAM capacity of the fabric switches may limit the number of policies you can configure. The Controller can also use VLAN tag-based forwarding, which reduces the TCAM resources required to implement a policy.

Tag-based forwarding is automatically enabled when the auto-VLAN mode is push-per-policy, which is the default. This configuration improves traffic forwarding within the monitoring fabric. DMF uses the assigned VLAN tags to forward traffic to the correct delivery interface, saving TCAM space. This feature is particularly useful on switches based on the Tomahawk chipset, which provide higher throughput but have less TCAM space.
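As a sketch, if the fabric has previously been switched to push-per-filter, the default mode can be restored from config mode. The auto-vlan-mode command name below is an assumption based on the Auto VLAN Mode feature name and may vary by release:

controller-1(config)# auto-vlan-mode push-per-policy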

Policy Rule Optimization

 

Prefix Optimization

A policy can match a large number of IPv4 or IPv6 addresses. These matches can be configured explicitly on each match rule, or the match rules can use an address group. With prefix optimization for IPv4 addresses, IPv6 addresses, and TCP ports, DANZ Monitoring Fabric (DMF) uses efficient masking algorithms to minimize the number of flow entries programmed in hardware.

Example 1: Optimization of addresses with the same mask.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# action forward
controller-1(config-policy)# delivery-interface TOOL-PORT-1
controller-1(config-policy)# filter-interface TAP-PORT-1
controller-1(config-policy)# 10 match ip dst-ip 1.1.1.0 255.255.255.255
controller-1(config-policy)# 11 match ip dst-ip 1.1.1.1 255.255.255.255
controller-1(config-policy)# 12 match ip dst-ip 1.1.1.2 255.255.255.255
controller-1(config-policy)# 13 match ip dst-ip 1.1.1.3 255.255.255.255
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
10 ether-type 2048 dst-ip 1.1.1.0 255.255.255.252
Example 2: When a covering (less specific) prefix exists, the more specific addresses are not programmed in TCAM.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# action forward
controller-1(config-policy)# delivery-interface TOOL-PORT-1
controller-1(config-policy)# filter-interface TAP-PORT-1
controller-1(config-policy)# 10 match ip dst-ip 1.1.1.0 255.255.255.255
controller-1(config-policy)# 11 match ip dst-ip 1.1.1.1 255.255.255.255
controller-1(config-policy)# 12 match ip dst-ip 1.1.1.2 255.255.255.255
controller-1(config-policy)# 13 match ip dst-ip 1.1.1.3 255.255.255.255
controller-1(config-policy)# 100 match ip dst-ip 1.1.0.0 255.255.0.0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
100 ether-type 2048 dst-ip 1.1.0.0 255.255.0.0
Example 3: IPv6 prefix optimization. When a covering prefix exists, the more specific addresses are not programmed in TCAM.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# 25 match ip6 src-ip 2001::100:100:100:0 FFFF:FFFF:FFFF::0:0
controller-1(config-policy)# 30 match ip6 src-ip 2001::100:100:100:0 FFFF:FFFF::0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
30 ether-type 34525 src-ip 2001::100:100:100:0 FFFF:FFFF::0
Example 4: Different subnet prefix optimization. In this case, addresses belonging to different subnets are optimized.
controller-1(config)# policy ip-addr-optimization
controller-1(config-policy)# 10 match ip dst-ip 2.1.0.0 255.255.0.0
controller-1(config-policy)# 11 match ip dst-ip 3.1.0.0 255.255.0.0
controller-1(config-policy)# show policy ip-addr-optimization optimized-match
Optimized Matches :
10 ether-type 2048 dst-ip 2.1.0.0 254.255.0.0

Here the two /16 prefixes differ only in a single bit of the first octet (2 is 00000010 and 3 is 00000011 in binary), so a single entry with that bit wildcarded in the mask (254.255.0.0) covers both subnets.

Transport Port Range and VLAN Range Optimization

The DANZ Monitoring Fabric (DMF) optimizes transport port ranges and VLAN ranges within a single match rule. Improvements in DMF version 8.5 now support cross-match rule optimization.

Show Commands

To view the optimized match rule, use the show command:

# show policy policy-name optimized-match

To view the configured match rules, use the following command:

# show running-config policy policy-name

Consider the following DMF policy configuration.

# show running-config policy p1
! policy
policy p1
action forward
delivery-interface d1
filter-interface f1
1 match ip vlan-id-range 1 4
2 match ip vlan-id-range 5 8
3 match ip vlan-id-range 7 16
4 match ip vlan-id-range 10 12

With the above policy configuration and before the DMF 8.5.0 release, the four match conditions would be optimized into the following TCAM rules:

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 vlan 0 vlan-mask 4092
1 ether-type 2048 vlan 4 vlan-mask 4095
2 ether-type 2048 vlan 5 vlan-mask 4095
2 ether-type 2048 vlan 6 vlan-mask 4094
3 ether-type 2048 vlan 16 vlan-mask 4095
3 ether-type 2048 vlan 8 vlan-mask 4088

However, with the cross-match rule optimizations introduced in DMF 8.5, the rules installed on the switch further optimize TCAM usage, resulting in:

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 vlan 0 vlan-mask 4080
1 ether-type 2048 vlan 16 vlan-mask 4095

A similar optimization technique applies to L4 ports in match conditions:

# show running-config policy p1
! policy
policy p1
action forward
delivery-interface d1
filter-interface f1
1 match tcp range-src-port 1 4
2 match tcp range-src-port 5 8
3 match tcp range-src-port 7 16
4 match tcp range-src-port 9 14

# show policy p1 optimized-match
Optimized Matches :
1 ether-type 2048 ip-proto 6 src-port 0 -16
1 ether-type 2048 ip-proto 6 src-port 16 -1

Switch Dual Management Port

 

Overview

When a DANZ Monitoring Fabric (DMF) switch disconnects from the Controller, the switch is taken out of the fabric, causing service interruptions. The dual management feature solves this problem by providing physical redundancy for the switch-to-controller management connection. DMF achieves this by bonding a switch data path port with the existing management interface so that it acts as a standby management interface, eliminating a single point of failure in the management connectivity between the switch and the Controller.

Once an interface on a switch is configured for management, this configuration persists across reboots and upgrades until the management configuration is explicitly disabled on the Controller.

Configure an interface for dual management using the CLI or the GUI.

Note: Along with the configuration on the Controller detailed below, dual management requires a physical connection in the same subnet as the primary management link from the data port to a management switch.

Configuring Dual Management Using the CLI

  1. From config mode, specify the switch to be configured with dual management, as in the following example:
    Controller-1(config)# switch DMF-SWITCH-1
    Controller-1(config-switch)#

    The CLI changes to the config-switch submode, which lets you configure the specified switch.

  2. From config-switch mode, enter the interface command to specify the interface to be configured as the standby management interface:
    Controller-1(config-switch)# interface ethernet40
    Controller-1(config-switch-if)#

    The CLI changes to the config-switch-if submode, which lets you configure the specified interface.

  3. From config-switch-if mode, enter the management command to specify the role for the interface:
    Controller-1(config-switch-if)# management
    Controller-1(config-switch-if)#
Note: When assigning an interface to a management role, no other interface-specific commands are honored for that interface (e.g., shut-down, role, speed, etc.).
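For reference, the complete sequence from the steps above is:

Controller-1(config)# switch DMF-SWITCH-1
Controller-1(config-switch)# interface ethernet40
Controller-1(config-switch-if)# management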

Configuring Dual Management Using the GUI

  1. Select Fabric > Switches from the main menu.
    Figure 62. Controller GUI Showing Fabric Menu List
  2. Click on the switch name to be configured with dual management.
    Figure 63. Controller GUI Showing Inventory of Switches
  3. Click on the Interfaces tab.
    Figure 64. Controller GUI Showing Switch Interfaces
  4. Identify the interface to be configured as the standby management interface.
    Figure 65. Controller GUI Showing Configure Knob
  5. Click on the Menu button to the left of the identified interface, then click on Configure.
    Figure 66. Controller GUI Showing Interface Settings
  6. Set Use for Management Traffic to Yes. This action configures the interface to the standby management role.
    Figure 67. Use for Management Traffic
  7. Click Save.

Management Interface Selection Using the GUI

By default, the dedicated management interface serves as the management port, with the front panel data port acting as a backup only when the management interface is unavailable:
  • When the dedicated management interface fails, the front panel data port becomes active as the management port.
  • When the dedicated management interface returns, it becomes the active management port.

When the management network is undependable, this can lead to switch disconnects. The Management Interface choice dictates what happens when the management interface returns after a failover. Make this selection using the GUI or the CLI.

Select Fabric > Switches .
Figure 68. Fabric Switches
Click on the switch name to be configured.
Figure 69. Switch Inventory
Select the Actions tab.
Figure 70. Switch Actions

Click the Configure Switch icon and choose the required Management Interface setting.

Figure 71. Controller GUI Showing Dual Management Settings

If selecting Prefer Dedicated Management Interface (the default), when the dedicated management interface goes down, the front panel data port becomes the active management port for the switch. When the dedicated management port comes back up, the dedicated management port becomes the active management port again, putting the front panel data port in an admin down state.

If selecting Prefer Current Interface, when the dedicated management interface goes down, the front panel data port still becomes the active management port for the switch. However, when the dedicated management port comes back up, the front panel data port continues to be the active management port.

Management Interface Selection Using the CLI

By default, the dedicated management interface serves as the management port, with the front panel data port acting as a backup only when the management interface is unavailable:
  • When the dedicated management interface fails, the front panel data port becomes active as the management port.
  • When the dedicated management interface returns, it becomes the active management port.

When the management network is undependable, this can lead to switch disconnects. The management interface selection choice dictates what happens when the management interface returns after a failover.

Controller-1(config)# switch DMF-SWITCH-1
Controller-1(config-switch)# management-interface-selection ?
prefer-current-interface Set management interface selection algorithm
prefer-dedicated-management-interface Set management interface selection algorithm (default selection)
Controller-1(config-switch)#

If selecting prefer-dedicated-management-interface (the default), when the dedicated management interface goes down, the front panel data port becomes the active management port for the switch. When the dedicated management port comes back up, the dedicated management port becomes the active management port again, putting the front panel data port in an admin down state.

If selecting prefer-current-interface, when the dedicated management interface goes down, the front panel data port still becomes the active management port for the switch. However, when the dedicated management port comes back up, the front panel data port continues to be the active management port.
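For example, to keep whichever interface is currently active after the dedicated management interface recovers, apply the option shown in the help output above:

Controller-1(config-switch)# management-interface-selection prefer-current-interface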

Switch Fabric Management Redundancy Status

To check the status of all switches configured with dual management as well as the interface that is being actively used for management, enter the following command in the CLI:

Controller-1# show switch all mgmt-stats

Additional Notes

  • A maximum of one data-plane interface on a switch can be configured as a standby management interface.
  • The switch management interface ma1 is a bond interface, having oma1 as the primary link and the data plane interface as the secondary link.
  • The bandwidth of the data-plane interface is limited regardless of the physical speed of the interface. Arista Networks recommends immediate remediation when the oma1 link fails.

Controller Lockdown

Controller lockdown mode, when enabled, disallows user configuration such as policy configuration, inline configuration, and rebooting of fabric components and disables data path event processing. If there is any change in the data path, it will not be processed.

The primary use case for this feature is a planned management switch upgrade. During such an upgrade, DANZ Monitoring Fabric (DMF) switches disconnect from the Controller, and DMF policies are reprogrammed, disrupting traffic forwarding to tools. Enabling this feature before starting a management switch upgrade prevents the existing DMF policies from being disrupted when DMF switches disconnect from the Controller, so traffic continues to be forwarded to the tools.

Note: DMF policies are reprogrammed when Controller lockdown mode is disabled after the management switch upgrade completes and the switches reconnect to the DMF fabric. Controller lockdown mode is a special operation and should not be enabled for a prolonged period.
  • Operations such as switch reboot, Controller reboot, Controller failover, Controller upgrade, policy configuration, etc., are disabled when Controller lockdown mode is enabled.
  • The command to enable Controller lockdown mode, system control-plane-lockdown enable, is not saved to the running config. Hence, Controller lockdown mode is disabled after Controller power down/up. When failover happens with a redundant Controller configured, the new active Controller will be in Controller lockdown mode but may not have all policy information.
  • In Controller lockdown mode, copying the running config to a snapshot will not include the system control-plane-lockdown enable command.
  • The CLI prompt will start with the prefix LOCKDOWN when this feature is enabled.
  • Link up/down and other events during Controller lockdown mode are processed after Controller lockdown mode is disabled.
  • All the events handled by the switch are processed in Controller lockdown mode. For example, traffic is hashed to other members automatically in Controller lockdown mode if one LAG member fails. Likewise, all switch-handled events related to inline are processed in Controller lockdown mode.
Use the following commands to enable Controller lockdown mode. Only an admin user can enable or disable this feature.
Controller# configure
Controller(config)# system control-plane-lockdown enable
Enabling control-plane-lockdown may cause service interruption. Do you want to continue ("y" or "yes" to continue): yes
LOCKDOWN Controller(config)#
To disable Controller lockdown mode, use the command below:
LOCKDOWN Controller(config)# system control-plane-lockdown disable
Disabling control-plane-lockdown will bring the fabric to normal operation. This may cause some
service interruption during the transition. Do you want to continue ("y" or "yes" to continue): yes
Controller(config)#

CPU Queue Stats and Debug Counters

SwitchLight OS (SWL) switches can now report their CPU queue statistics and debug counters. To view these statistics, use the DANZ Monitoring Fabric (DMF) Controller CLI. DMF exports the statistics to any connected DMF Analytics Node.

The CPU queue statistics provide visibility into the different queues that the switch uses to prioritize packets needing to be processed by the CPU. Higher-priority traffic is assigned to higher-priority queues.

The SWL debug counters, while not strictly limited to packet processing, include information related to the Packet-In Multiplexing Unit (PIMU). The PIMU performs software-based rate limiting and acts as a second layer of protection for the CPU, allowing the switch to prioritize specific traffic.

Note: The feature runs on all SWL switches supported by DMF.

Configuration

These statistics are collected automatically and do not require any additional configuration to enable.

To export statistics, configure a DMF Analytics Node. Please refer to the DMF User Guide for help configuring an Analytics Node.

Show Commands

Showing the CPU Queue Statistics

The following command shows the statistics for the CPU queues on a single switch.
controller-1> show switch FILTER-SWITCH-1 queue cpu
# Switch            OF Port Queue ID Type      Tx Packets Tx Bytes  Tx Drops Usage
-|-----------------|-------|--------|---------|----------|---------|--------|---------------------------------|
1 FILTER-SWITCH-1   local   0        multicast 830886     164100990 0        lldp, l3-delivery-arp, tunnel-arp
2 FILTER-SWITCH-1   local   1        multicast 0          0         0        l3-filter-arp, analytics
3 FILTER-SWITCH-1   local   2        multicast 0          0         0
4 FILTER-SWITCH-1   local   3        multicast 0          0         0
5 FILTER-SWITCH-1   local   4        multicast 0          0         0        sflow
6 FILTER-SWITCH-1   local   5        multicast 0          0         0
7 FILTER-SWITCH-1   local   6        multicast 0          0         0        l3-filter-icmp

There are a few things to note about this output:

  • The CPU's logical port is also known as the local port.
  • The counter values shown are based on the last time the statistics were cleared.
  • Different CPU queues may be used for various types of traffic. The Usage column displays the traffic that an individual queue is handling. Not every CPU queue is used.

The details token can be added to view more information. This includes the absolute (or raw) counter values, the last updated time, and the last cleared time.
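For example, assuming the same switch as above, the token is appended to the command:

controller-1> show switch FILTER-SWITCH-1 queue cpu details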

Showing the Debug Counters

The following command shows all of the debug counters for a single switch:
controller-1> show switch FILTER-SWITCH-1 debug-counters 
#  Switch            Name                            Value   Description
--|-----------------|-------------------------------|-------|-------------------------------------------|
1  FILTER-SWITCH-1   arpra.total_in_packets          1183182 Packet-ins recv'd by arpra
2  FILTER-SWITCH-1   debug_counter.register          79      Number of calls to debug_counter_register
3  FILTER-SWITCH-1   debug_counter.unregister        21      Number of calls to debug_counter_unregister
4  FILTER-SWITCH-1   pdua.total_pkt_in_cnt           1183182 Packet-ins recv'd by pdua
5  FILTER-SWITCH-1   pimu.hi.drop                    8       Packets dropped
6  FILTER-SWITCH-1   pimu.hi.forward                 1183182 Packets forwarded
7  FILTER-SWITCH-1   pimu.hi.invoke                  1183190 Rate limiter invoked
8  FILTER-SWITCH-1   sflowa.counter_request          9325983 Counter requests polled by sflowa
9  FILTER-SWITCH-1   sflowa.packet_out               7883772 Sflow datagrams sent by sflowa
10 FILTER-SWITCH-1   sflowa.port_features_update     22      Port features updated by sflowa
11 FILTER-SWITCH-1   sflowa.port_status_notification 428     Port status notif's recv'd by sflowa

The counter values shown are based on the last time the statistics were cleared.

Add the name or the ID token and a debug counter name or ID to filter the output.

Add the details token to view more information. This includes the debug counter ID, the absolute (or raw) counter values, the last updated time, and the last cleared time.
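For example, assuming the same switch as above; the exact placement of the name and details tokens is an assumption and may vary:

controller-1> show switch FILTER-SWITCH-1 debug-counters name pimu.hi.drop
controller-1> show switch FILTER-SWITCH-1 debug-counters details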

Clear Commands

Clearing the Debug Counters

The following command will clear all of the debug counters for a single switch:
controller-1# clear statistics debug-counters

Clearing all Statistics

To clear both the CPU queue stats and the debug counters for every switch, use the following command:
controller-1# clear statistics
Note: This command is not limited to switches. It clears any clearable statistics for every device.

Analytics Export

The following statistics are automatically exported to a connected Analytics Node:

  • CPU queue statistics for every switch.
    Note: This does not include the statistics for queues associated with physical switch interfaces.
  • The PIMU-related debug counters. These are debug counters whose name begins with pimu. No other debug counters are exported.

DMF exports these statistics once every minute.

Note: The exported CPU queue statistics will include port number -2, which refers to the switch CPU's logical port.

Troubleshooting

Use the details token with the show commands described above to display more information about the statistics. This information includes timestamps showing when the statistics were collected and when they were last cleared.

To verify that statistics have been exported successfully, use the redis-cli command from the Bash shell on the DMF Controller to query the Redis server on the Analytics Node.

The following command queries for the last ten exported debug counters:
redis-cli -h analytics-ip -p 6379 LRANGE switch-debug-counters -10 -1
Likewise, to query for the last ten exported CPU queue stats:
redis-cli -h analytics-ip -p 6379 LRANGE switch-queue-stats -10 -1

Limitations

  • Only the CPU queue stats are exported to the Analytics Node. Physical interface queue stats are not exported.
  • Only the PIMU-related debug counters are exported to the Analytics Node. No other debug counters are exported.
  • Only SWL switches are currently supported. EOS switches are not supported.