As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution. Redundancy

In a VXLAN routing setup using VXLAN Controller Service (VCS), this feature will enable the following on a switch that is running as a VCS client.

This feature introduces a new CLI command which disables the above-mentioned propagation of DSCP and ECN bits from the outer IP header. 

This feature allows a Data Center (DC) operator to incrementally migrate their VXLAN network from IPv4 to IPv6

IPv6 EVPN VXLAN 4.26.0F

As described in the Multi VTEP MLAG TOI, singly connected hosts can lead to suboptimal peer link utilisation. By

This feature enables support for Macro Segmentation Service (MSS) to insert security devices into the traffic path

EVPN VXLAN MSS 4.24.2F

Ethernet VPN (EVPN) is an extension of the BGP protocol introducing a new address family: L2VPN (address family

This feature adds control plane support for inter-subnet forwarding between EVPN and IPVPN networks. It also

This feature adds control plane support for inter-subnet forwarding between EVPN networks. This support is achieved

EVPN MPLS VXLAN 4.25.0F

“MLAG Domain Shared Router MAC” is a new mechanism to introduce a new router MAC to be used for MLAG TOR

EVPN VXLAN 4.21.3F

As described in the L3 EVPN VXLAN Configuration Guide, it is common practice to use Layer 3 EVPN to provide multi

VRF EVPN VXLAN 4.24.0F

In EOS 4.22.0F, EVPN VXLAN all-active multihoming L2 support is available. A customer edge (CE) device can connect to

Ethernet VPN (EVPN) networks normally require some measure of redundancy to reduce or eliminate the impact of outages and maintenance. RFC 7432 describes four route types to be exchanged through EVPN, with a built-in multihoming mechanism for redundancy. Prior to EOS 4.22.0F, MLAG was available as a redundancy option for EVPN with VXLAN, but multihoming was not. EVPN multihoming is a multi-vendor, standards-based redundancy solution that does not require a dedicated peer link and allows for more flexible configurations than MLAG, supporting peering at a per-interface level rather than a per-device level. It also supports a mass-withdrawal mechanism to minimize traffic loss when a link goes down.
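
As a rough illustration only, not taken from this document: an Ethernet segment for all-active multihoming is typically defined on the CE-facing interface of each participating PE, with the same identifier on every PE so that remote VTEPs can load-balance toward the segment and withdraw it in a single update when a link fails. The interface, ESI, and route-target values below are placeholders, and exact syntax may vary by EOS release.

   interface Port-Channel10
      evpn ethernet-segment
         ! placeholder ESI; must be identical on every PE attached to this CE
         identifier 0000:0000:0000:0000:0001
         ! placeholder ES-import route target
         route-target import 00:00:00:00:00:01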

This feature enables support for an EVPN VxLAN control plane in conjunction with Arista’s OpenStack ML2 plugin for

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from an IPv6 host to

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6

In a traditional EVPN VXLAN centralized anycast gateway deployment, multiple L3 VTEPs serve the role of the

Typical Wi-Fi networks utilize a single, central Wireless LAN Controller (WLC) to act as a gateway between the

In VXLAN networks, broadcast DHCP requests are head-end-replicated to all VXLAN tunnel endpoints (VTEPs). If a DHCP relay helper address is configured on more than one VTEP, each such VTEP relays the DHCP request to the configured DHCP server. This can overwhelm the DHCP server, since it receives multiple copies of each broadcast packet originated by a host connected to one of the VTEPs.
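
As a hedged sketch (addresses are placeholders, not taken from this document): when the same helper address is configured on the gateway SVI of several VTEPs, each of those switches relays its own copy of the head-end-replicated broadcast, so the server sees one request per VTEP.

   interface Vlan100
      ip address 10.10.100.2/24
      ! placeholder DHCP server address; the same helper configured on
      ! multiple VTEPs means one relayed copy per VTEP reaches this server
      ip helper-address 192.0.2.10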

EOS 4.22.1F introduces support for ip address virtual for PIM and IGMP in MLAG and VXLAN. On a VLAN, the same IP address can

Several customers have expressed interest in using IPv6 addresses for VXLAN underlay in their Data Centers (DC). Prior to 4.24.1F, EOS only supported IPv4 addresses for VXLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only.

As of EOS 4.22.0F, EVPN all active multihoming is supported as a standardized redundancy solution. For effective

EVPN VXLAN IAR L2 4.26.1F

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data

EVPN VXLAN 4.26.1F

This feature extends the previously introduced multi-domain EVPN VXLAN feature to support interconnect with EVPN MPLS networks. The following diagram shows a multi-domain deployment with EVPN VXLAN in the data center and EVPN MPLS in the WAN. Note that this is the only supported deployment model, and that an EVPN MPLS network cannot peer with another EVPN MPLS network.

[L2 EVPN] and [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast

This solution allows delivery of multicast traffic in an IP VRF using multicast in the underlay network. It builds on

The Multicast EVPN IRB solution allows for the delivery of customer BUM (Broadcast, Unknown unicast, and Multicast) traffic in L3VPNs using multicast in the underlay network. This document covers only the information that is new or different for the Multicast EVPN Multiple Underlay Groups solution.

By default, when an SVI is configured on a VXLAN VLAN, broadcast, unknown unicast, and unknown multicast (BUM) traffic received from the tunnel is punted to the CPU. However, sending unknown unicast and unknown multicast traffic to the CPU is unnecessary and can have negative side effects. Specifically, these packets take the L2Broadcast CoPP queue to the CPU.

Overlay IPv6 routing over VXLAN Tunnel is simply routing IPv6 packets in and out of VXLAN Tunnels, similar to

EOS currently supports VXLAN L2 integration with external controllers using the Arista OVSDB HW VTEP schema ([HW

VXLAN 4.18.0F OVSDB

Private VLAN is a feature that segregates a regular VLAN broadcast domain while maintaining all ports in the same IP

This feature introduces support for ACL configuration on VXLAN decapsulated packets. The configured ACL rules will

VXLAN RACL 4.24.0F

VXLAN UDP ESP support allows the customer to encrypt traffic between two VXLAN VTEPs. The frame

Prior to 4.25.2F, support for BGP PIC was restricted to locally identifiable failures such as link failures. If a

Overlay IPv6 routing over a VXLAN tunnel using an anycast gateway (direct routing) has previously been supported using the “ipv6 virtual-router” configuration for both data-plane and EVPN (or CVX) control-plane learning environments.
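
For reference, a minimal sketch of the kind of configuration this refers to (addresses are placeholders and exact syntax may vary by release): the same virtual-router address is configured on the SVI of every VTEP so that hosts see a single anycast IPv6 gateway.

   ! placeholder shared virtual MAC used by all VTEPs (global setting)
   ip virtual-router mac-address 00:1c:73:00:00:99
   !
   interface Vlan100
      ipv6 enable
      ! placeholder anycast gateway address, repeated on every VTEP
      ipv6 virtual-router address 2001:db8:100::1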

Several customers have expressed interest in using IPv6 addresses for VxLAN underlay in their Data Centers (DC). Prior to 4.27.2F, only IPv4 addresses were supported for VxLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only. This feature enables a VTEP to send VxLAN-encapsulated packets using an IPv6 underlay.
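
As an illustrative sketch only (interface names and addresses are placeholders, and the commands needed to enable an IPv6 underlay may differ by release), the idea is that the VTEP sources its VXLAN tunnels from an interface carrying an IPv6 address, so remote VTEPs are reached over IPv6.

   interface Loopback1
      ! placeholder IPv6 VTEP address advertised into the underlay
      ipv6 address 2001:db8::1/128
   !
   interface Vxlan1
      vxlan source-interface Loopback1
      vxlan udp-port 4789
      vxlan vlan 10 vni 10010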

This feature expands Multi Domain EVPN VXLAN to support an Anycast Gateway model as the mechanism for gateway

This feature enables support for migrating from only using VCS as the control plane to only using EVPN as a control

VCS EVPN VXLAN 4.23.1F

VXLAN flood lists are typically configured via CLI or learned via control plane sources such as EVPN. The

Current VXLAN decapsulation logic requires the following hits on affected switches listed in the following

VRF VXLAN 4.26.2F

This feature allows selecting Differentiated Services Code Point (DSCP) and Traffic Class (TC) values for packets

VXLAN DSCP 4.26.0F

In EOS 4.18.0F, VXLAN direct routing was introduced on the 7500R and 7280E/R series platforms. VXLAN routing

Configuration of VXLAN overlay using EVPN allows for extension of Layer 2 (L2) or Layer 3 (L3) networks across

VXLAN 4.21.3F

The VxLAN VTEP and VNI counters feature allows the device to count VxLAN packets received and sent by the device on a per

The VxLAN VTEP counters feature allows the device to count VxLAN packets received and sent by the device on a per

The VxLAN VTEP counters feature allows the device to count VxLAN packets received and sent by the device on a per VTEP

The “vxlan bridging vtep to vtep” feature allows VXLAN encapsulated packets ingressed at an Arista switch from a