DANZ Monitoring Fabric User Guide

Link Aggregation

This chapter describes how to configure link aggregation groups (LAGs) between switches, between switches and tools, or between switches and taps.

Configuring Link Aggregation

Link aggregation combines multiple LAN links and cables in parallel, providing link redundancy and higher aggregate transmission speed.
Note: When connecting a Link Aggregation Group (LAG) to a DMF Service Node appliance, member links can be connected to multiple DMF Service Node appliances with data ports of the same speed.

DMF provides a configurable method of hashing for load distribution among LAG members. The enhanced hashing algorithm automatically assigns the best hashing type for the switch and traffic. This setting allows the manual selection of the packet types and fields used for load distribution among the members of a port-channel interface. Enhanced mode and symmetric hashing are enabled by default for the supported switch platforms. With symmetric hashing, bidirectional traffic between two hosts going out on a port channel is distributed on the same member port.

The default hashing option uses the best available packet header fields that apply to each packet and are supported by the switch. These fields can include the following:
  • IPv4
  • IPv6
  • MPLS (disabled by default)
  • L2GRE packet
If none of these headers apply, DMF uses Layer-2 header fields (source MAC address, destination MAC address, VLAN-ID, and ethertype) to distribute traffic among the LAG member interfaces. Hashing on the following packet header fields is enabled by default:
  • hash l2 dst-mac eth-type src-mac vlan-id
  • hash ipv4 dst-ip src-ip
  • hash ipv6 dst-ip src-ip
  • hash l2gre inner-l3 dst-ip src-ip
  • hash symmetric
Note: DMF treats VN-tagged and QinQ packets as L2 packets and uses Layer-2 headers to distribute traffic among LAG member interfaces for these packets.
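
For reference, these defaults correspond to the following commands in the lag-enhanced-hash submode described in Configuring Hashing Fields later in this chapter. A minimal sketch of entering them manually (the switch name is illustrative):

controller-1(config)# switch DMF-FILTER-SWITCH-1
controller-1(config-switch)# lag-enhanced-hash
controller-1(config-switch-hash)# hash l2 dst-mac eth-type src-mac vlan-id
controller-1(config-switch-hash)# hash ipv4 dst-ip src-ip
controller-1(config-switch-hash)# hash ipv6 dst-ip src-ip
controller-1(config-switch-hash)# hash l2gre inner-l3 dst-ip src-ip
controller-1(config-switch-hash)# hash symmetric enable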

Using the CLI to Configure Link Aggregation Groups

  1. Use the lag-interface command to enter the config-switch-lag-if submode to define the LAG member interfaces and specify the type of load distribution (hashing) to use for the LAG.
  2. Use the member command to add an interface to a LAG. Enter this command for each interface to add to the LAG. To remove an interface, use the no member version of the command (see the sketch after this procedure).
    For example, the following commands add two interfaces to a LAG named mylag.
    controller-1(config)# switch DMF-FILTER-SWITCH-1
    controller-1(config-switch)# lag-interface mylag
    controller-1(config-switch-lag-if)# member ethernet13
    controller-1(config-switch-lag-if)# member ethernet14
  3. To configure multiple delivery interfaces as a LAG, complete the following steps:
    1. Assign a name to the LAG and enter the config-switch-lag-if submode.
      controller-1(config)# switch DMF-DELIVERY-SWITCH-1
      controller-1(config-switch)# lag-interface lag1
      controller-1(config-switch-lag-if)#
    2. Assign members to the LAG.
      controller-1(config-switch-lag-if)# member ethernet39
      controller-1(config-switch-lag-if)# member ethernet40
  4. To configure a core LAG, apply the membership configuration to both ends of the connection:
    controller-1(config)# switch SWITCH-1
    controller-1(config-switch)# lag-interface core-link
    controller-1(config-switch-lag-if)# member ethernet1
    controller-1(config-switch-lag-if)# member ethernet3
    controller-1(config)# switch SWITCH-2
    controller-1(config-switch)# lag-interface core-link
    controller-1(config-switch-lag-if)# member ethernet49
    controller-1(config-switch-lag-if)# member ethernet51
    Note: When you set up a core LAG between DMF fabric switches, configure link aggregation symmetrically on both ends of the connection; otherwise, the LAG enters a lag_misconfiguration error state.
  5. To view the configured LAGs, enter the show lag command, as in the following example:
    controller-1> show lag
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lag Interfaces ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # Switch   LAG       State Rx Rate Pkt Rate Peak Rate Peak Pkt Rate TX Rate Pkt Rate Peak Rate Peak Pkt Rate
    -|--------|---------|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|
    1 SWITCH-1 core-link up    720bps  0        2.59Gbps  608181        184bps  0        53.2Mbps  37150
    2 SWITCH-2 core-link up    184bps  0        424bps    0             720bps  0        1.08Kbps  1
    
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Member Interfaces ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # Switch   LAG       Member     State Fault Rx Rate Pkt Rate Peak Rate Peak Pkt Rate Tx Rate Pkt Rate Peak Rate Peak Pkt Rate
    -|--------|---------|----------|-----|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|
    1 SWITCH-1 core-link ethernet1  up          624bps  0        800bps    1             88bps   0        152bps    0
    2 SWITCH-1 core-link ethernet3  up          96bps   0        18.5Mbps  1735          88bps   0        136bps    0
    3 SWITCH-2 core-link ethernet49 up          88bps   0        152bps    0             624bps  0        800bps    1
    4 SWITCH-2 core-link ethernet51 up          88bps   0        280bps    0             96bps   0        18.5Mbps  1735
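
To remove a member interface from a LAG, as mentioned in Step 2, use the no member form of the command. A minimal sketch, reusing the mylag example above:

controller-1(config)# switch DMF-FILTER-SWITCH-1
controller-1(config-switch)# lag-interface mylag
controller-1(config-switch-lag-if)# no member ethernet14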

Using the GUI to Configure Link Aggregation Groups

Overview

The DMF LAGs page introduces an improved workflow and additional functionality.

Navigate to Fabric > LAGs.

Main Page

The main page contains four components:

  • LAG Information
  • LAG Alerts
  • Switch LAG Enhanced Hash Settings
  • LAG Members Utilization

LAG Information

The LAG Information table displays all the LAGs configured in DMF and each member's status. A row contains the LAG Name, Switch Name, and Member Status.

Hover over a member under Status to view the status of each member interface.

Use the checkbox to Edit or Delete a LAG entry. Select the LAG Name link to open the properties tab of the LAG details panel.

Create or Edit LAG

Select Create LAG to open the configuration panel.

Enter a LAG Name and the Switch Name from the drop-down list, and choose the member Interfaces from that switch.

As an option, set the Minimum Links configuration required for the LAG to stay up. Refer to Minimum Link and Activation Configuration for Lag Interfaces for more information.

Select Submit to create the LAG.

Tip: If no configured LAGs exist, select Create LAG on the main page.

To edit a LAG, choose it in the Information table and select Edit.

The LAG Name and Switch Name are not editable, but you can modify the member Interfaces and Minimum Links values.

Only one LAG can be edited at a time; however, the system supports deleting multiple LAGs simultaneously.

Select Submit to commit the changes to the LAG.

LAG Alerts

LAG Alerts displays all LAG Alerts and LAG-related Switch Alerts. The Switch or LAG tag indicates the type of alert. Expand each item to view the alert description and the number of occurrences.

LAG Enhanced Hash Grid

The Switch LAG Enhanced Hash Settings grid displays a switch's current LAG enhanced hash settings across various hash fields and categories. If a cell is gray, it has no currently active sub-settings. Hover over a blue cell to view the active fields. Select a switch name link to open the Enhanced Hash Configuration tab of the LAG Details pane.

LAG Members Utilization

The LAG Members Utilization table summarizes the utilization for each LAG.

In the Utilization Range column, a colored bar indicates the start of the minimum utilization and the end of the maximum utilization of its members. Hover over the bar to view the Speed, minimum bit Rate, and maximum bit Rate of the LAG for each direction. The bar changes color based on LAG alerts:

  • Green – no alerts.
  • Yellow – warnings only.
  • Red – more than one alert.

Select the LAG Name link to open the Utilization tab of the LAG Details pane.

Details Pane

Select a LAG or switch link to open the details pane. Choose a LAG to view its details. When choosing a switch link in the Enhanced Hash grid, the first LAG on that switch is selected. If there is no LAG on the switch, the system displays an empty string and opens the Enhanced Hash tab for that switch; the Properties and Utilization tabs are disabled. Select a different LAG to access the other tabs.

Properties

The LAG Name, Switch Name, Members tags, and Minimum Link values appear in the descriptions section. The Members Configuration table shows the configuration of each member interface. Use Show/Hide Columns to determine which columns to display.

Utilization

Utilization displays the utilization statistics for the LAG.

There are two time-series charts for RX and TX stats. These charts are updated every 10 seconds and display each LAG member's bit-rate / packet rate. For LAGs with over five members, select Top 5 or Bottom 5 from the All drop-down in the chart to display only the top or bottom five interfaces.

Alternate between Bytes and Packets to change the statistics displayed. Alternating between bytes and packets will not reset the time-series chart since both bytes and packet data are polled every 10 seconds.

Select Clear/Reset LAG Stats to re-initialize the chart. Selecting a new LAG re-initializes the charts, and polling begins for new LAG utilization data.

Enhanced Hash Configuration

Before the DMF 8.6 release, the LAG Enhanced Hash configuration occurred on the Switches page. Now, it is done in the LAGs Detail pane.

Select Unlock to Edit to enable changing configuration settings and Save Changes to submit the changes.

The LAGs detail pane closes, and the updated settings appear in the Switch Lag Enhanced Hash Settings grid table.

Each column lists sub-settings for L2, L2 GRE Inner L2, and other configurations.

Select a link in Switch LAG Enhanced Hash Settings to turn these on and off, depending on the data required. Validations ensure a correct combination of settings (for example, L2 GRE Inner L2 and L2 GRE Inner L3 cannot be set at the same time); any violations appear in an AlertMessageList in the pane. The configuration must pass all validations before the changes can be submitted.

The Symmetric field has three options: Default, Enabled, and Disabled.

  • Disable the Symmetric field for the GTP Match settings.
  • Enter First Byte and First Byte Mask as hex values 0–255 with up to four Port Match Entries.
  • Each port match entry must have a port combo and number values for the corresponding ports.

When setting the Port Combo to AND or OR, both Src Port and Dst Port must have non-zero values.

Minimum Link and Activation Configuration for LAG Interfaces

When some LAG member links go down, it may be preferable to isolate the filter switch by bringing down the entire LAG interface rather than delivering unreliable data to tools and devices.

Two additional commands are part of the DANZ Monitoring Fabric (DMF) lag-interface configuration to aid in managing the LAG interface when a specified number of links go down. These commands are:

  • minimum-link
  • activate

minimum-link

The minimum-link command configures the minimum number of member links that must be up for the LAG to remain functional. If the number of active links in the interface group falls below the configured threshold, DMF brings down all the active links belonging to the interface group.

The minimum-link command is optional and defaults to 0. Choose a reasonable value: if it exceeds the total number of member links, the LAG will always be down. For example, with four member links and minimum-link 2, the LAG stays up until three members fail.

activate

If the LAG was shut down because the number of active links fell below the minimum-link threshold, use the activate command to manually bring it back up.

Note: If you make changes to fix the downed LAG members and then use the activate command to re-enable the LAG, the LAG goes down again if the number of active members is still below the minimum-link value.

Configuration

The minimum-link and activate commands are available within a switch's lag-interface submode. The following example illustrates configuring these options:

  1. Use the CLI to enter the switch's submode, then enter the LAG interface submode using the lag-interface command.
  2. Use the minimum-link command to configure the minimum threshold value required before DMF brings down the LAG.
  3. Use the activate command to bring back up a LAG that was previously shut down because it had fewer active member links than the minimum threshold.
dmf-controller-1(config)# switch s1
dmf-controller-1(config-switch)# lag-interface lag1
dmf-controller-1(config-switch-lag-if)# minimum-link 2
dmf-controller-1(config-switch-lag-if)# activate

Show Commands

While in the switch and lag-interface submodes, the show running-config and show this commands list the configured minimum-link threshold value. However, since activate is an action command rather than a configuration value, it doesn't appear when running the show commands.

dmf-controller-1# show running-config switch core1 lag lag1 
! switch
switch core1
 !
 lag-interface lag1
 member a
 member b
 member c
 member d
 minimum-link 2

Troubleshooting

  • If a LAG always shows down, ensure the minimum-link value is not greater than the total number of member links in that LAG.

Limitations

  • The system doesn’t provide any warning when the minimum-link value exceeds the total number of member links in that LAG.
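
Because no warning is given, it is worth verifying the value manually: compare the minimum-link value in the running configuration against the number of members reported for the LAG. A sketch using the show commands covered above (switch and LAG names are illustrative):

dmf-controller-1# show running-config switch core1 lag lag1
dmf-controller-1# show lag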

Tunnel Endpoint for vCenter Integration

This feature supports using Link Aggregation Group (LAG) in the tunnel endpoint configuration and runs on DANZ Monitoring Fabric (DMF) compatible switches supporting LAGs. Refer to the DMF 8.6 Hardware Compatibility Guide to view the list of compatible switches.

Configuration

Add the configured LAG as an interface in the tunnel endpoint configuration, as shown in the following example.

dmf-controller-1(config)# tunnel-endpoint tep1 switch swl-leaf-RU35 lag1 ip-address 192.168.199.254 mask 255.255.255.0 gateway 192.168.199.1
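
The LAG referenced above (lag1) must already exist on the switch. A sketch of its prior configuration, using the lag-interface commands from this chapter (member interface names are illustrative):

dmf-controller-1(config)# switch swl-leaf-RU35
dmf-controller-1(config-switch)# lag-interface lag1
dmf-controller-1(config-switch-lag-if)# member ethernet1
dmf-controller-1(config-switch-lag-if)# member ethernet2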

Show Commands

Review the tunnel endpoint configuration using the following command:

dmf-controller-1# show running-config tunnel-endpoint

! tunnel-endpoint
tunnel-endpoint tep1 switch swl-leaf-RU35 lag1 ip-address 192.168.199.254 mask 255.255.255.0 gateway 192.168.199.1

Limitations

  • Do not use LAG members as a Tunnel endpoint interface.
  • Do not configure multiple gateways for the same interface used in Tunnel endpoints.
  • Do not use a LAG configured as management or DMF interface (filter or delivery) in the Tunnel endpoint configuration.

Configuring Hashing Fields

To configure the hashing fields manually via the CLI, use the lag-enhanced-hash command to enter config-switch-hash mode as in the following example:

controller1(config)# switch DMF-FILTER-SWITCH-1
controller1(config-switch)# lag-enhanced-hash
controller1(config-switch-hash)#
The hash commands have the following syntax.
  • To hash on GTP fields, use one of the following options:
    controller1(config-switch-hash)# hash gtp
    header-first-byte Configure fields to identify GTP traffic
    port-match Configure UDP tunnel port match entry
  • To hash on IPv4 fields, use one of the following options:
    controller1(config-switch-hash)# hash ipv4
    <cr>
    dst-ip Destination IPv4 address (optional)
    l4-dst-port TCP/UDP destination port (optional)
    l4-src-port TCP/UDP source port (optional)
    protocol IP protocol (optional)
    src-ip Source IPv4 address (optional)
    vlan-id Vlan Id (optional)
  • To hash on IPv6 fields, use one of the following options:
    controller1(config-switch-hash)# hash ipv6
    <cr>
    dst-ip Collapsed destination IPv6 address (optional)
    l4-dst-port TCP/UDP destination port (optional)
    l4-src-port TCP/UDP source port (optional)
    nxt-hdr Next Header (optional)
    src-ip Collapsed source IPv6 address (optional)
    vlan-id Vlan Id (optional)
  • To hash on Layer-2 fields, use one of the following options:
    controller1(config-switch-hash)# hash l2
    dst-mac Destination MAC address
    eth-type Ethernet Type
    src-mac Source MAC address
    vlan-id Vlan Id
  • To hash on L2GRE fields, use one of the following options:
    controller1(config-switch-hash)# hash l2gre
    inner-l2 Use inner L2 fields for hash computation (optional)
    inner-l3 Use inner L3 fields for hash computation (optional)
  • To hash on MPLS labels, use one of the following options:
    controller1(config-switch-hash)# hash mpls
    <cr>
    label-1 Lower 16 bits of MPLS label 1 (optional)
    label-2 Lower 16 bits of MPLS label 2 (optional)
    label-3 Lower 16 bits of MPLS label 3 (optional)
    label-hi-bits Higher 4 bits of MPLS Labels 1,2 and 3 (optional)
  • To manually configure the hash seeds:
    controller1(config-switch-hash)# hash seeds
    <First hash seed> Configure seed1 for hash computation
    controller1(config-switch-hash)# hash seeds 3809
    <cr>
    <Second hash seed> Configure seed2 for hash computation (optional)
    controller1(config-switch-hash)# hash seeds 3809 90901
    <cr>
  • To enable or disable symmetric hashing:
    controller1(config-switch-hash)# hash symmetric
    <cr>
    disable Disable symmetric hashing
    enable Enable symmetric hashing

L2 GRE Key Hashing

The L2 GRE Key-based hashing feature allows L2 GRE packets to be hashed based on the L2 GRE (Tunnel) Key on core DMF switches.

Previously, L2 GRE payload-based hashing (Inner L2 or Inner L3) applied only to L2 GRE packets terminated at DMF delivery or filter switches. Hashing L2 GRE packets transiting a DMF core switch across port-channel interfaces was not functional because the L2 GRE tunnel did not terminate on the core switch.

With the L2 GRE Key-based hashing feature, users can now hash L2 GRE packets based on the L2 GRE Key on core DMF switches.
Note: The L2 GRE Key-based hashing feature applies to switches running SWL OS and does not apply to switches running EOS.

CLI Configuration

L2 GRE Key-based hashing is supported only for IPv4-based packets with an L2 GRE payload. This feature does not support IPv6 packets with L2 GRE payloads.

Enable the L2 GRE Key hashing by setting the l2-gre-key parameter, as shown in the following example.

Controller-Active# show running-config switch DMF-SWITCH-1
! switch
switch DMF-SWITCH-1
mac c0:d6:82:17:fd:5a
!
lag-enhanced-hash
hash ipv4 l2-gre-key
hash symmetric disable
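
The running configuration above corresponds to the following configuration steps; a sketch, assuming the same switch name:

Controller-Active(config)# switch DMF-SWITCH-1
Controller-Active(config-switch)# lag-enhanced-hash
Controller-Active(config-switch-hash)# hash ipv4 l2-gre-key
Controller-Active(config-switch-hash)# hash symmetric disable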

GUI Configuration

  1. Configure L2 GRE Key hashing for a switch on the Fabric > Switches page using the Configure option in the table row actions menu.
    Figure 1. Fabric Switch Configure Menu
  2. Enable the L2 GRE Key for the IPv4 packets in the LAG Enhanced Hash step.
    Note: The L2 GRE Key is unsupported for IPv6 and VXLAN Inner L3.
    Figure 2. Configure Switch L2 GRE Key
  3. Click the Submit button to save the configuration.

CLI Commands

Use the following CLI commands to verify settings and troubleshoot any issues that may arise.

# show lag-enhanced-hash

While logged into a switch, use the following commands to troubleshoot this feature.

root@DMF-SWITCH-1:~# ofad-ctl gt PORT_CHANNEL_ENHANCED_HASH_FIELD
Hash Field Configs:
-------------------
Symmetric Hashing:
Disabled
L2GRE Key Hashing:
Enabled
L2 Fields:
IPv4 Fields:
DSTL4 SRCL4
IPv6 Fields:
MPLS Fields:
L2GRE L2 Fields:
L2GRE L3 Fields:
VXLAN L2 Fields:
VXLAN L3 Fields:

root@DMF-SWITCH-1:~# ofad-ctl bshell getreg RTAG7_HASH_CONTROL_L2GRE_MASK_A
RTAG7_HASH_CONTROL_L2GRE_MASK_A.ipipe0[1][0x6a001900]=0xffffffff
: <L2GRE_TUNNEL_GRE_KEY_MASK_A=0xffffffff>

root@DMF-SWITCH-1:~# ofad-ctl bshell getreg RTAG7_HASH_CONTROL_L2GRE_MASK_B
RTAG7_HASH_CONTROL_L2GRE_MASK_B.ipipe0[1][0x6a001a00]=0xffffffff
: <L2GRE_TUNNEL_GRE_KEY_MASK_B=0xffffffff>

root@s5248f-1:~# ofad-ctl bshell getreg RTAG7_L2GRE_PAYLOAD_L2_HASH_FIELD_BMAP
RTAG7_L2GRE_PAYLOAD_L2_HASH_FIELD_BMAP.ipipe0[1][0x6a001b00]=0: <
L2GRE_PAYLOAD_L2_BITMAP_B=0,L2GRE_PAYLOAD_L2_BITMAP_A=0>

root@s5248f-1:~# ofad-ctl bshell getreg RTAG7_L2GRE_PAYLOAD_L3_HASH_FIELD_BMAP
RTAG7_L2GRE_PAYLOAD_L3_HASH_FIELD_BMAP.ipipe0[1][0x6a001c00]=0: <
L2GRE_PAYLOAD_L3_BITMAP_B=0,L2GRE_PAYLOAD_L3_BITMAP_A=0>
root@DMF-SWITCH-1:~# 
Note: The L2GRE_KEY offset is the same as the SRCL4 and DSTL4 offset in hardware. Hence, the hardware requires setting SRCL4 and DSTL4 hash fields and the L2GRE_KEY hash field to hash the packets using the L2GRE_KEY.

VXLAN Hashing

VXLAN hashing enables hashing on a VXLAN payload, including the inner L3 source IP, inner L3 destination IP, inner L2 source MAC, and inner L2 destination MAC. This applies only to terminated (decapsulated) VXLAN traffic.

Symmetric hashing works with VXLAN packet Inner L3 Source IP/Destination IP, Inner L4 Source Port/Destination Port, and Outer L3 Source IP/Destination IP.

Note: VXLAN hashing applies to switches running SWL OS.

CLI Configuration

VXLAN hashing includes hashing on inner L2 and inner L3 fields; enable at least one of the following parameters under the switch construct in the Controller CLI:
# lag-enhanced-hash
hash vxlan inner-l2 dst-mac 
hash vxlan inner-l3 dst-ip

UI Configuration

  1. Configure VXLAN hashing for a switch on the Fabric > Switches page using the Configure option in the table row actions menu.
    Figure 3. Fabric Switch Configure Menu
  2. In the LAG Enhanced Hash step, configure the following fields depending on your requirements:
    • L2 VxLAN Inner L2 fields
    • VxLAN Inner L3 fields
    Note:
    • L2 GRE Key is not supported for VXLAN hash fields.
    • Cannot simultaneously specify enhanced hash for L2 GRE Inner L2 and Inner L3.
    • Cannot simultaneously specify enhanced hash for VXLAN Inner L2 and Inner L3.
    Figure 4. Configure Switch LAG Enhanced Hash
  3. Click the Submit button to save the configuration.

CLI Commands

Use the following CLI commands to verify settings and troubleshoot any issues that may arise.

# show lag-enhanced-hash

Use the following commands to troubleshoot this feature, for example, when hashing happens on the VXLAN payload inner L3 source IP.

root@mrv1:~# ofad-ctl gt PORT_CHANNEL_ENHANCED_HASH_FIELD
Hash Field Configs:
-------------------
Symmetric Hashing:
Disabled
L2 Fields:
IPv4 Fields:
IPv6 Fields:
MPLS Fields:
L2GRE L2 Fields:
L2GRE L3 Fields:
VXLAN L2 Fields:
VXLAN L3 Fields:
IP4SRC_LO IP4SRC_HI

root@mrv1:~# ofad-ctl bshell getreg RTAG7_HASH_CONTROL_4
RTAG7_HASH_CONTROL_4.ipipe0[1][0x6a000700]=3:
<VXLAN_PAYLOAD_HASH_SELECT_B=1,VXLAN_PAYLOAD_HASH_SELECT_A=1,DISABLE_HASH_VXLAN_B=0,DISABLE_HASH_VXLAN_A=0>

root@mrv1:~# ofad-ctl bshell getreg RTAG7_VXLAN_PAYLOAD_L2_HASH_FIELD_BMAP
RTAG7_VXLAN_PAYLOAD_L2_HASH_FIELD_BMAP.ipipe0[1][0x6a001d00]=0: <
VXLAN_PAYLOAD_L2_BITMAP_B=0,VXLAN_PAYLOAD_L2_BITMAP_A=0>

root@mrv1:~# ofad-ctl bshell getreg RTAG7_VXLAN_PAYLOAD_L3_HASH_FIELD_BMAP
RTAG7_VXLAN_PAYLOAD_L3_HASH_FIELD_BMAP.ipipe0[1][0x6a001e00]=0x1800c00
: <VXLAN_PAYLOAD_L3_BITMAP_B=0xc00,VXLAN_PAYLOAD_L3_BITMAP_A=0xc00>

VXLAN LAG Hashing on DCS-7280 Platforms

The following information addresses Virtual Extensible LAN (VXLAN) hashing capabilities and behavior, specifically on the DCS-7280 platforms.

Platform Compatibility

There are two use cases:

  1. Decapsulation – Strips the VXLAN header.
  2. Transit – Retains the VXLAN header.

Since VXLAN Header Stripping is only supported on DCS-7280R3 platforms, the Decapsulation use case is supported only on these platforms.

The Transit use case is supported on all DCS-7280 platforms.

Configuration

There are two assumptions:

  1. The VXLAN packet is IPv4 (outer).
  2. The VXLAN packet has a UDP destination port value matching the configured value (default – 4789).

For the decapsulation use case, the packet context is advanced to the start of the payload upon VXLAN parsing on ingress. Hence, you can configure normal (outer) hash fields to hash against the VXLAN payload (inner) as if the outer encapsulation had already been discarded.
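
For example, a sketch of hashing decapsulated VXLAN traffic on its inner IPv4 addresses using the ordinary hash fields (switch name illustrative, assuming VXLAN header stripping is in effect):

(config)# switch switch-name
(config-switch)# lag-enhanced-hash
(config-switch-hash)# hash ipv4 dst-ip src-ip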

For the transit use case, traffic is hashed based on the outer header, just like any non-VXLAN UDP packet. Arista advises configuring L4 source-port hashing to hash against the UDP source port of the VXLAN packet, which typically carries entropy derived from its payload.
> enable
# config
(config)# switch switch-name
(config-switch)# lag-enhanced-hash
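
Continuing in the hash submode, the advised L4 source-port hashing can then be enabled with the documented hash ipv4 option (a sketch):

(config-switch-hash)# hash ipv4 l4-src-port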

Limitations

  • Only IPv4 VXLAN tunnel header stripping is supported.
  • Currently, it is not possible to limit VXLAN packet parsing to selected interfaces on a given switch. Configuring strip-vxlan on any switch interface triggers VXLAN packet parsing globally on the switch. In this situation, the packet context of any ingressing VXLAN packet is advanced to the start of its payload. As a result, policy matching and hashing behavior is affected for all VXLAN packets on this switch, even packets not subject to strip-vxlan.

MLAG RTAG7 Hash Computation for Unbalanced Load Balancing

In DANZ Monitoring Fabric (DMF), all SWL OS switches with the same underlying ASIC have the same RTAG7 hash parameters configured by default. After configuring MLAG to load balance traffic from filter to delivery switches, and when the number of interfaces in the MLAG and LAG in the delivery switch is the same, the traffic will not be load balanced in the delivery switch because of hash polarization.

Use the feature to avoid LAG hash polarization by choosing different hash algorithms in separate switches. The other option is using a different packet field set in each switch.

The feature supports using different hash algorithms by choosing different hash seed values.

Note: MLAG RTAG7 hash computation for unbalanced load balancing applies only to switches running SWL OS, does not apply to switches running EOS, and is supported only on select Broadcom® switch ASICs that run SWL OS.

CLI Configuration

Configure the feature on each switch using the following CLI commands:
(config)# switch S4048T
(config-switch)# lag-enhanced-hash
(config-switch-hash)# hash symmetric enable
(config-switch-hash)# hash seeds 1234 3456
(config-switch-hash)# hash l2 dst-mac
(config-switch-hash)# hash l2 src-mac
(config-switch-hash)# hash l2 eth-type
(config-switch-hash)# hash l2 vlan-id
Use the following show command to view the configured hash seed.
R450-C1# show lag-enhanced-hash switch S4048T
~~~~~~~~~~~~~~~~~~~~~~~~~ L2 Enhanced Hash ~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch DPID Dst mac Eth type Src mac Vlan id Seed1 Seed2 Symmetric
-|-----------|-------|--------|-------|-------|-----|-----|---------|
1 S4048T      True    True     True    True    1234  3456  True
Note: The CLI example above illustrates an L2 configuration.

GUI Configuration

From the home page, select the switches using Fabric > Switches.

A list of available switches displays.

Choose the required switch for configuring the hash seeds. Hash seeds are integers used to generate random numbers in the RTAG7 hash algorithm.

Select Actions from the menu.

On the Configure Switch page, select LAG Enhanced Hash.

Toggle on the following:
  • Options
    • Symmetric
  • L2 Fields
    • Src. MAC
    • Dst. MAC
    • Ether-Type
    • VLAN ID
Enter the hash Seed 1 and Seed 2 values in the LAG Enhanced Hash window.
Click Submit.
Note: The GUI example above illustrates an L2 configuration.

Optional: View the configured hash seed using the CLI and the following show command.

R450-C1# show lag-enhanced-hash
~~~~~~~~~~~~~~~~~~~~~~~~~~~ L2 Enhanced Hash ~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch DPID      Dst mac Eth type Src mac Vlan id Seed1 Seed2 Symmetric
-|----------------|-------|--------|-------|-------|-----|-----|---------|
1 A-7050SX3-48YC8  True    True     True    True    1234  3456  True

Troubleshooting a Configured Hash Seed

Use the following procedure to ensure the configured hash seed is correctly applied on the switch.
  1. Log in to the switch using the connect switch switch_name command.
  2. Use the ofad-ctl gt PORT_CHANNEL_ENHANCED_HASH_SEED command to print the seed values and the hash algorithm.
  3. Ensure the hash algorithms differ in the switches where hash polarization occurs.
  4. Configure the hash seeds such that the hash algorithms are different.
Example
R450-C1(config)# connect switch A-7050SX3-48YC8
Switch Light OS SWL-OS-DMF-8.6.x(0), 2023-12-07.09:17-42d8658
Linux A-7050SX3-48YC8 4.19.296-OpenNetworkLinux #1 SMP Thu Dec 7 09:28:30 UTC 2023 x86_64
Switch Light ZTN Manual Configuration. Type help or ? to list commands.
(ztn-config) debug bash
*****************************WARNING******************************
Any/All activities within bash mode are UNSUPPORTED
This is intended ONLY for additional debugging ONLY by Arista TAC.
Please type "exit" or Ctrl-D to return to the CLI
*****************************WARNING******************************
root@A-7050SX3-48YC8:~# ofad-ctl gt PORT_CHANNEL_ENHANCED_HASH_SEED
GENTABLE : port_channel_enhanced_hash_seed
GENTABLE ID : 0x0006
hash_seed_valid: true
HASH_SEED1=0x000004d2
HASH_SEED2=0x000004d2
computed hash algorithm from seed: CRC32LO
root@A-7050SX3-48YC8:~#

Pseudo Multi-Chassis Link Aggregation

DMF supports Link Aggregation Groups (LAGs), which allow two or more physical interfaces on the same DMF switch to be aggregated into one logical interface to increase the aggregate bandwidth and provide redundancy against link failure. This works well when all the tools connect to the same DMF delivery switch, typically when customer tools are co-located in the same physical location. However, when tools reside in different data centers or physical locations, where a single DMF switch cannot connect to all the tools, load balancing across two DMF delivery switches is required.

A pseudo-multi-chassis Link Aggregation Group (MLAG) provides redundancy for each delivery switch connected to a multi-homed tool. With MLAG, traffic is hashed on the upstream DMF switch across two active-active links toward the delivery switches. If one of the switches fails, the traffic will fail over to the healthy switch.

MLAG Components

  • MLAG Domain: An MLAG domain is a logical grouping of two delivery switches that will participate in an MLAG.
  • Peer Switch: Member switches added into the MLAG domain.
  • MLAG Interface: An MLAG interface, configured under the MLAG domain, is a logical binding of two physical interfaces or LAG interfaces, one from each peer switch.
  • Core MLAG Link: A fabric-facing MLAG link. A core switch LAG interface, whose members connect to the two peer switches participating in the MLAG domain.
  • Delivery MLAG Link: An MLAG interface that is assigned the delivery interface role. This interface is used in a policy as a delivery interface.
  • MLAG Member Interface: A physical interface or a LAG interface added into an MLAG interface.
  • DMF Policy: A user-configured DMF policy that contains at least one MLAG delivery interface.
  • Dynamic MLAG Domain Policy: Dynamically configured policies that follow the naming convention _mlag_<DMF-policy>_<DeliverySwitch>. For one user-configured MLAG policy (a policy that uses at least one MLAG delivery interface), two dynamic MLAG domain policies are created, one for each peer switch.

MLAG Limitations

  • An MLAG domain cannot have more than two switches.
  • A switch can only be a part of one MLAG domain.
  • An MLAG interface can only have two member interfaces.
  • An MLAG interface can only have one interface (physical interface or LAG interface) from each peer switch.
  • Tunnel interfaces are not supported as members in MLAG interface configuration.

Configuring an MLAG via the CLI

To configure an MLAG, use the following steps:
  1. Configure an MLAG domain by specifying an alias, and add peer switches that will be participating in the MLAG.
    Controller-1(config)# mlag-domain MLAG-Domain1
    Controller-1(config-mlag-domain)# peer-switch DeliverySwitch-1
    Controller-1(config-mlag-domain)# peer-switch DeliverySwitch-2
  2. Configure the core MLAG interface.
    Controller-1(config-mlag-domain)# mlag-interface MLAG-Core-Intf
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-1 interface ethernet50
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-2 interface ethernet50

    The above MLAG interface configuration selects one physical interface from each peer switch added into the MLAG domain. This MLAG interface is fabric-facing, which means that ethernet50 of DeliverySwitch-1 and ethernet50 of DeliverySwitch-2 are connected to the DMF core switch, where traffic hashing is performed.

  3. Configure the core LAG interface, a LAG interface on the core switch. The members of the LAG interface are connected to the peer switches in the MLAG domain. This configuration ensures that the traffic will be hashed toward the two connected delivery switches.
    Controller-1(config)# switch CoreSwitch-1
    Controller-1(config-switch)# lag-interface Core-LAG
    Controller-1(config-switch-lag-if)# member ethernet10
    Controller-1(config-switch-lag-if)# member ethernet20
  4. Configure the delivery MLAG interface by specifying an interface alias and selecting one member from each delivery switch.
    Controller-1(config-mlag-domain)# mlag-interface MLAG-Del-Intf
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-1 interface ethernet1
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-2 interface ethernet1
    Controller-1(config-mlag-domain-if)# role delivery interface-name MLAG-Tool-1

    The above MLAG interface configuration selects one physical interface from each peer switch added into the MLAG domain. The members of this MLAG interface, ethernet1 of DeliverySwitch-1 and ethernet1 of DeliverySwitch-2, are connected to multi-homed tools. Note that unlike the core MLAG interface, the delivery MLAG interface is assigned the delivery role and its interface name is configured, so that it can be used in DMF policies as a delivery interface.

  5. Configure a DMF policy by following the procedure shown below:
    Controller-1(config)# policy Policy-1
    Controller-1(config-policy)# action forward
    Controller-1(config-policy)# 1 match any
    Controller-1(config-policy)# filter-interface Filter-1
    Controller-1(config-policy)# delivery-interface MLAG-Tool-1
    The above policy is configured using the MLAG-Tool-1 interface configured in Step 4. Configuring the policy to use an MLAG delivery interface will result in two dynamic policies, one for each peer switch. Refer to the following topology for the policy breakdown.
    Figure 5. MLAG Policy Breakdown
As seen in the topology above:
  • The user-configured policy delivers traffic from the filter switch to the core switch LAG interface.
  • Dynamic Policy 1 delivers traffic to delivery switch 1.
  • Dynamic Policy 2 delivers traffic to delivery switch 2.
Below are the interface details for the three policies as configured on the DMF Controller:

Policy: Policy-1 Interfaces
  • Filter Interface(s) section lists the filter interface configured for the policy, Policy-1.
  • Core Interface(s) section lists the interfaces that connect the filter switch and the core switch selected for the policy.
  • MLAG Core Interface(s) section displays the core LAG interface that hashes the traffic towards the peer switches.
  • MLAG Delivery Interface(s) section lists the delivery MLAG interface members.
Policy: _mlag_Policy-1_DeliverySwitch-1 Interfaces
  • Filter Interface(s) section lists the dynamically configured interface name on DeliverySwitch-1 to which the core switch is connected.
  • MLAG Delivery Interface(s) section lists the delivery MLAG interface member on DeliverySwitch-1.
Policy: _mlag_Policy-1_DeliverySwitch-2 Interfaces
  • Filter Interface(s) section lists the dynamically configured interface name on DeliverySwitch-2 to which the core switch is connected.
  • MLAG Delivery Interface(s) section lists the delivery MLAG interface member on DeliverySwitch-2.

MLAG Link Discovery

Link Layer Discovery Protocol (LLDP) is used to discover MLAG links. When the DMF Controller receives an LLDP message, it looks for the switch and interface names. If the switch is a part of an MLAG domain, and the reported interface corresponds to the MLAG interface, then it is classified as an MLAG link.

Controller-1(config)# show link all link-type mlag-member
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Links ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Active State Src switch       Src IF Name Dst switch       Dst IF Name Link Type   Since
-|------------|----------------|-----------|----------------|-----------|-----------|-----------------------|
1 active       CoreSwitch-1     ethernet10  DeliverySwitch-1 ethernet50  mlag-member 2022-11-11 21:54:28 UTC
2 active       CoreSwitch-1     ethernet20  DeliverySwitch-2 ethernet50  mlag-member 2022-11-11 21:54:28 UTC
3 active       DeliverySwitch-1 ethernet50  CoreSwitch-1     ethernet10  mlag-member 2022-11-11 21:54:28 UTC
4 active       DeliverySwitch-2 ethernet50  CoreSwitch-1     ethernet20  mlag-member 2022-11-11 21:54:13 UTC
Controller-1(config)#

Configure MLAG via GUI

To configure an MLAG domain from the GUI, go to the Fabric > MLAGs tab.

Figure 6. Fabric > MLAGs page
Figure 7. Create MLAG page

Click on Create MLAG Domain and enter the following:

  • Domain Name: Enter the MLAG domain alias.
    • Peer Switch 1: From the drop-down, select the first switch that will be participating in the MLAG domain.
    • Peer Switch 2: From the drop-down, select the second switch that will be participating in the MLAG domain.
  • MLAG Interfaces: Enter an alias for the fabric-facing MLAG interface. This interface connects the core switch to the peer switches in the MLAG domain.
    • Peer Switch 1 and Peer Switch 2: After selecting peer switches under the domain name, the peer switches under the MLAG interface will automatically be selected.
    • Interface 1: Select the member interface that connects the core switch to DeliverySwitch-1.
    • Interface 2: Select the member interface that connects the core switch to DeliverySwitch-2.
  • MLAG Delivery Interfaces: Enter an alias for each MLAG delivery interface.
    • DMF Interface Name: Enter the DMF interface name for the MLAG delivery interface. This alias will be used to identify the delivery interface while configuring the DMF policy.
    • Strip VLAN on Egress: Select the strip VLAN configuration for the MLAG delivery interface.
    • Peer Switch 1 and Peer Switch 2: These will be automatically selected based on the peer switches selected under the domain name.
    • Interface 1: Select the member interface on Peer Switch 1.
    • Interface 2: Select the member interface on Peer Switch 2.
Click Create to save the configuration.
Figure 8. MLAG Domain State
Figure 9. MLAG Domain Expanded View

The above screenshot displays the MLAG domain status. Click + to expand the domain configuration and view the status of each MLAG interface.

Create MLAG Policy from GUI

To configure an MLAG policy, go to the Monitoring > Policies page.
Figure 10. Configure MLAG Policy

Select + Create Policy to add a new policy.

Click + Add Ports in Destination Tools and select or drag the MLAG interface to associate with the policy.
Figure 11. Add MLAG Interface

Click Add 1 Interface and enter the following to configure the policy association.

  • Name: Assign a unique name to the policy.
  • Description: A description for the policy.
  • Action: Forward (default), None, Capture, or Drop.
  • Priority: 100 (default), or enter a value.
  • Scheduling: Automatic (default), Now, Set Time, or Set Delay.
  • Port Selection > Traffic Sources > + Add Port(s): Select the filter interface (traffic source) for the policy.
  • Match Traffic > Allow All Traffic / Deny All Traffic or Configure A Rule: Specify the traffic rule for the policy.

Click Create Policy.

Viewing Policy Statistics in the GUI

After configuring the MLAG policy, view it under Monitoring > Policies along with the dynamic policies created as part of the MLAG policy.
Figure 12. MLAG Policy
To view the policy statistics, roll over a Policy Name or click on the MLAG Policy Name. The following screens appear.
Figure 13. MLAG Policy - Interface Statistics
Figure 14. MLAG Policy - Operational Details

Viewing MLAG Links in the GUI

To view the MLAG links, go to Fabric > Links > MLAG Member Links tab.
Figure 15. MLAG Member Links

The above screenshot shows the MLAG links established between the core switch and the peer switches that are part of the MLAG domain. The LLDP message exchange discovers the links.

Using LAG Interfaces as Members in MLAG Interfaces

MLAG interface members can be physical interfaces or LAG interfaces to increase bandwidth. To add a LAG member to an MLAG interface, use the following procedure:

  1. Configure the LAG interface on Peer Switch 1.
    Controller-1(config)# switch DeliverySwitch-1
    Controller-1(config-switch)# lag-interface LAG-peer-switch-1
    Controller-1(config-switch-lag-if)# member ethernet11
    Controller-1(config-switch-lag-if)# member ethernet12
  2. Configure the LAG interface on Peer Switch 2.
    Controller-1(config)# switch DeliverySwitch-2
    Controller-1(config-switch)# lag-interface LAG-peer-switch-2
    Controller-1(config-switch-lag-if)# member ethernet11
    Controller-1(config-switch-lag-if)# member ethernet12
  3. Add the configured LAG interfaces as members into the MLAG interface.
    Controller-1(config)# mlag-domain Domain1
    Controller-1(config-mlag-domain)# mlag-interface MLAG-LAG-Del-Intf
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-1 interface LAG-peer-switch-1
    Controller-1(config-mlag-domain-if)# member switch DeliverySwitch-2 interface LAG-peer-switch-2
    Controller-1(config-mlag-domain-if)# role delivery interface-name MLAG-LAG-Tool-1
  4. Configure the DMF policy using the delivery interface MLAG-LAG-Tool-1.
    Controller-1(config)# policy Policy-1
    Controller-1(config-policy)# action forward
    Controller-1(config-policy)# 1 match any
    Controller-1(config-policy)# filter-interface Filter-1
    Controller-1(config-policy)# delivery-interface MLAG-LAG-Tool-1
    Note: Traffic will not hash toward the tools if the core switch LAG has the same number of member interfaces as the LAGs on the MLAG delivery peer switches.
    Workaround:
    • Ensure that the hash fields on the core switch and the MLAG peer switches are different (a sketch follows this list), or
    • Ensure that the number of member interfaces in the LAG interface configured on the core switch differs from the number of members in the LAG interfaces configured on the peer switches.
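
    As a sketch of the first workaround, different hash algorithms can be obtained by configuring different hash seeds on the core switch, as described in MLAG RTAG7 Hash Computation for Unbalanced Load Balancing above (the switch name and seed values are illustrative):

    Controller-1(config)# switch CoreSwitch-1
    Controller-1(config-switch)# lag-enhanced-hash
    Controller-1(config-switch-hash)# hash seeds 5678 7890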

Overlapping Policies in LAGs

An overlapping policy is dynamically configured if two configured policies share at least one filter interface and at least one of the delivery interfaces is different.

When two DMF policies that use an MLAG interface as a delivery interface overlap, the following policies are created:
  1. Policy-1 uses the filter interface Filter-1 and the delivery interface MLAG-Tool-1.
  2. Policy-2 uses the filter interface Filter-1 and the delivery interface MLAG-Tool-2.
  3. The above two policies result in a dynamically configured overlapping policy, named following the convention _Policy-1_o_Policy-2.
After calculating the overlapping policy for the two user-configured policies, DMF configures two dynamic policies (one for each peer switch in the MLAG domain) for each of the three policies listed above, six in total.
Table 1. Dynamic Policies
MLAG Dynamic Policy                          Parent Policy         Delivery Switch/Peer Switch
_mlag_Policy-1_DeliverySwitch-1              Policy-1              DeliverySwitch-1
_mlag_Policy-1_DeliverySwitch-2              Policy-1              DeliverySwitch-2
_mlag_Policy-2_DeliverySwitch-1              Policy-2              DeliverySwitch-1
_mlag_Policy-2_DeliverySwitch-2              Policy-2              DeliverySwitch-2
_mlag Policy-1_o_Policy-2_DeliverySwitch-1   _Policy-1_o_Policy-2  DeliverySwitch-1
_mlag Policy-1_o_Policy-2_DeliverySwitch-2   _Policy-1_o_Policy-2  DeliverySwitch-2

The following policies, Policy-1 and Policy-2, share the same filter interface, Filter-1, but are configured to use different delivery interfaces, MLAG-Tool-1 and MLAG-Tool-2. No priority is configured; these policies use the same default priority.

Policy-1 Configuration
policy Policy-1
action forward
delivery-interface MLAG-Tool-1
filter-interface Filter-1
1 match ip src-ip 200.200.0.0 255.255.255.0
Policy-2 Configuration
policy Policy-2
action forward
delivery-interface MLAG-Tool-2
filter-interface Filter-1
1 match ip dst-ip 100.100.0.0 255.255.255.0
The above two policies will result in an overlapping policy.
Controller-1(config)# show policy
# Policy Name          Action  Runtime Status Type       Priority Overlap Priy Push VLAN Filter BW Delivery BW Post Match Filt Traff Del Traffic Services
-|--------------------|-------|--------------|----------|--------|------------|---------|---------|-----------|---------------------|-----------|--------|
1 Policy-1             forward installed      Configured 100      0            1         25Gbps    80Gbps      314Mbps               315Mbps
2 Policy-2             forward installed      Configured 100      0            3         25Gbps    80Gbps      314Mbps               315Mbps
3 _Policy-1_o_Policy-2 forward installed      Dynamic    100      1            5         25Gbps    80Gbps      314Mbps               315Mbps
  • Policy-1: User-configured policy to forward packets matching source IP 200.200.0.0/24 to MLAG-Tool-1.
  • Policy-2: User-configured policy to forward packets matching destination IP 100.100.0.0/24 to MLAG-Tool-2.
  • _Policy-1_o_Policy-2: A dynamically configured overlapping policy with higher Overlap Priority to ensure that if a packet matches rules from both the policies (source IP of 200.200.0.1 and destination IP of 100.100.0.1), it forwards to both MLAG-Tool-1 and MLAG-Tool-2.

The following are the dynamic policies configured for each delivery switch in the MLAG Domain.

Policy-1 Dynamic Policies
  • _mlag_Policy-1_DeliverySwitch-1: MLAG dynamic policy for Policy-1 for DeliverySwitch-1.
  • _mlag_Policy-1_DeliverySwitch-2: MLAG dynamic policy for Policy-1 for DeliverySwitch-2.
Policy-2 Dynamic Policies
  • _mlag_Policy-2_DeliverySwitch-1: MLAG dynamic policy for Policy-2 for DeliverySwitch-1.
  • _mlag_Policy-2_DeliverySwitch-2: MLAG dynamic policy for Policy-2 for DeliverySwitch-2.

_Policy-1_o_Policy-2 Dynamic Policies

The following policies have higher Overlap Priority than the rest to prioritize the overlapping traffic forwarded to DeliverySwitch-1 and DeliverySwitch-2.
  • _mlag Policy-1_o_Policy-2_DeliverySwitch-1: MLAG dynamic policy for overlapping policy for DeliverySwitch-1.
  • _mlag Policy-1_o_Policy-2_DeliverySwitch-2: MLAG dynamic policy for overlapping policy for DeliverySwitch-2.