As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution.

In a VXLAN routing setup using the VXLAN Control Service (VCS), this feature enables the following on a switch running as a VCS client.

This is achieved by using the next-hop of the static route as the peer IP address for the BFD session. The static route is either installed or removed based on the status of the underlying BFD session. A static route whose next-hop is configured to be tracked by BFD is referred to as a ‘BFD tracked static route’ in the context of this document. This feature is supported for both IPv4 and IPv6 static routes.
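
As an illustration only (the exact keyword placement may vary by release), a BFD tracked static route is configured by attaching BFD tracking to the route's next hop, roughly along these lines:

   ! assumption: the "bfd" keyword on a static route enables next-hop tracking via BFD
   ip route 203.0.113.0/24 198.51.100.1 bfd
   ipv6 route 2001:db8:1::/48 2001:db8::1 bfd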

This feature introduces a new CLI command which disables the above-mentioned propagation of DSCP and ECN bits from the outer IP header. 

By default, the DSCP and ECN bits of VXLAN bridged packets are not rewritten. Currently, for bridged packets undergoing VXLAN encapsulation, the DSCP in the outer IP header is derived from the TC and the ECN bits are set to zero. The desired behavior is for the outer IP header to be marked with the ingress packet's DSCP and ECN values, and for local congestion to be handled correctly.

This feature allows a Data Center (DC) operator to incrementally migrate their VXLAN network from IPv4 to IPv6.

IPv6 EVPN VXLAN 4.26.0F

As described in the Multi VTEP MLAG TOI, singly connected hosts can lead to suboptimal peer link utilisation. By

This feature enables support for Macro Segmentation Service (MSS) to insert security devices into the traffic path

EVPN VXLAN MSS 4.24.2F

Ethernet VPN (EVPN) is an extension of the BGP protocol introducing a new address family: L2VPN (address family number 25, subsequent address family number 70, EVPN).
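
For context, a minimal sketch of enabling the EVPN address family toward a BGP peer in EOS (the peer address and AS numbers are illustrative):

   router bgp 65001
      neighbor 192.0.2.1 remote-as 65000
      neighbor 192.0.2.1 send-community extended
      !
      address-family evpn
         neighbor 192.0.2.1 activate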

This feature adds control plane support for inter subnet forwarding between EVPN and IPVPN networks. It also

This feature adds control plane support for inter subnet forwarding between EVPN networks. This support is achieved

EVPN MPLS VXLAN 4.25.0F

“MLAG Domain Shared Router MAC” is a new mechanism to introduce a new router MAC to be used for MLAG TOR

EVPN VXLAN 4.21.3F

As described in the L3 EVPN VXLAN Configuration Guide, it is common practice to use Layer 3 EVPN to provide multi
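
A minimal L3 EVPN sketch along those lines, mapping a tenant VRF to a VNI (the VRF name, VNI, RD, and route-target values are illustrative):

   vrf instance RED
   ip routing vrf RED
   !
   interface Vxlan1
      vxlan source-interface Loopback0
      vxlan vrf RED vni 50001
   !
   router bgp 65001
      vrf RED
         rd 192.0.2.1:1
         route-target import evpn 1:1
         route-target export evpn 1:1
         redistribute connected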

In EOS 4.22.0F, EVPN VXLAN all-active multihoming L2 support is available. A customer edge (CE) device can connect to

Ethernet VPN (EVPN) networks normally require some measure of redundancy to reduce or eliminate the impact of outages and maintenance. RFC7432 describes four types of route to be exchanged through EVPN, with a built-in multihoming mechanism for redundancy. Prior to EOS 4.22.0F, MLAG was available as a redundancy option for EVPN with VXLAN, but not multihoming. EVPN multihoming is a multi-vendor standards-based redundancy solution that does not require a dedicated peer link and allows for more flexible configurations than MLAG, supporting peering on a per interface level rather than a per device level. It also supports a mass withdrawal mechanism to minimize traffic loss when a link goes down.
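
A minimal sketch of an all-active Ethernet segment on a CE-facing port channel; the ES identifier, ES-import route-target, and shared LACP system ID shown here are illustrative values:

   interface Port-Channel10
      switchport mode trunk
      evpn ethernet-segment
         identifier 0000:1111:2222:3333:4444
         route-target import 00:11:22:33:44:55
      lacp system-id 0011.2233.4455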

This feature enables support for an EVPN VxLAN control plane in conjunction with Arista’s OpenStack ML2 plugin for

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from an IPv6 host to

In a traditional EVPN VXLAN centralized anycast gateway deployment, multiple L3 VTEPs serve the role of the

Typical Wi-Fi networks utilize a single, central Wireless LAN Controller (WLC) to act as a gateway between the

In VXLAN networks, broadcast DHCP requests are head-end replicated to all VXLAN tunnel endpoints (VTEPs). If a DHCP relay helper address is configured on more than one VTEP, each such VTEP relays the DHCP request to the configured DHCP server. This can overwhelm the DHCP server, since it receives multiple copies of broadcast packets originating from a host connected to one of the VTEPs.
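
For reference, a DHCP relay helper address is configured per SVI, so when the same helper is present on several VTEPs hosting the VLAN, each relays its own copy of the broadcast (addresses are illustrative):

   interface Vlan100
      ip address 10.1.100.2/24
      ip helper-address 10.10.10.5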

EOS 4.22.1F introduces support for “ip address virtual” for PIM and IGMP in MLAG and VXLAN. On a VLAN, the same IP address can
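
A minimal sketch of the shared anycast address with PIM enabled on the SVI (addresses are illustrative):

   interface Vlan200
      ip address virtual 10.2.0.1/24
      pim ipv4 sparse-mode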

This solution allows delivery of IPv6 multicast traffic in an IP-VRF using IPv4 multicast in the underlay network. The protocol used to build multicast trees in the underlay network is PIM Sparse Mode.

Several customers have expressed interest in using IPv6 addresses for VXLAN underlay in their Data Centers (DC). Prior to 4.24.1F, EOS only supported IPv4 addresses for VXLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only.
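
For context, a typical IPv4-underlay VTEP is anchored on a loopback used as the VXLAN source interface (addresses and VNI values are illustrative):

   interface Loopback0
      ip address 192.0.2.11/32
   !
   interface Vxlan1
      vxlan source-interface Loopback0
      vxlan udp-port 4789
      vxlan vlan 10 vni 10010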

As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution. For effective

EVPN VXLAN IAR L2 4.26.1F

For traffic mirroring, Arista switches support several types of mirroring destinations. This document describes a new type of mirroring destination in which mirrored traffic is tunneled over VXLAN as the inner packet to a remote VTEP. This feature is useful when the traffic analyzer is a VTEP reachable over a VXLAN tunnel.

This solution optimizes the delivery of multicast to a VLAN over an Ethernet VPN (EVPN) network. Without this solution, IPv6 multicast traffic in a VLAN is flooded to all Provider Edge (PE) devices that contain the VLAN.

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data

EVPN VXLAN 4.26.1F

This feature extends the previously introduced multi-domain EVPN VXLAN feature to support interconnect with EVPN MPLS networks. A typical multi-domain deployment has EVPN VXLAN in the data center and EVPN MPLS in the WAN. Note that this is the only supported deployment model; an EVPN MPLS network cannot peer with another EVPN MPLS network.

[L2 EVPN] and [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast

This solution allows delivery of multicast traffic in an IP VRF using multicast in the underlay network. It builds on

The Multicast EVPN IRB solution allows for the delivery of customer BUM (Broadcast, Unknown unicast, and Multicast) traffic in L3VPNs using multicast in the underlay network. This document covers only the information that is new or different for the Multicast EVPN Multiple Underlay Groups solution.

This feature allows packets to be VXLAN encapsulated after NAT translation, and reverse NAT translation to be applied to VXLAN tunnel-terminated packets.

By default, when an SVI is configured on a VXLAN VLAN, broadcast, unknown unicast, and unknown multicast (BUM) traffic received from the tunnel is punted to the CPU. However, sending unknown unicast and unknown multicast traffic to the CPU is unnecessary and can have negative side effects. Specifically, these packets take the L2Broadcast CoPP queue to the CPU.

Overlay IPv6 routing over VXLAN Tunnel is simply routing IPv6 packets in and out of VXLAN Tunnels, similar to

EOS currently supports VXLAN L2 integration with external controllers using the Arista OVSDB HW VTEP schema ([HW

VXLAN 4.18.0F OVSDB

Enabling “Proxy ARP/ND for Single Aggregation (AG) VTEP Campus Deployments without EVPN” allows an aggregation VTEP to send a proxy reply to a VXLAN-encapsulated ARP request or Neighbor Solicitation (NS) when the target host is remote and its ARP/ND binding has already been learned by the AG VTEP.

Private VLAN is a feature that segregates a regular VLAN broadcast domain while maintaining all ports in the same IP

This feature introduces support for ACL configuration on VXLAN decapsulated packets. The configured ACL rules will

VXLAN UDP ESP support allows the customer to encrypt traffic between two VXLAN VTEPs. The frame

Prior to 4.25.2F, support for BGP PIC was restricted to locally identifiable failures such as link failures. If a

Prior to this feature, we supported a maximum of two levels of Forwarding Equivalence Class (FEC) hierarchy for VXLAN routing tunnels in hardware.

Overlay IPv6 routing over VXLAN tunnels using an anycast gateway (direct routing) has previously been supported using the “ipv6 virtual-router” configuration in both data-plane and EVPN (or CVX) control-plane learning environments.
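
A minimal sketch of that anycast gateway configuration (the virtual MAC and addresses are illustrative):

   ip virtual-router mac-address 00:1c:73:00:00:01
   !
   interface Vlan300
      ipv6 address 2001:db8:300::2/64
      ipv6 virtual-router address 2001:db8:300::1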

Several customers have expressed interest in using IPv6 addresses for the VXLAN underlay in their Data Centers (DC). Prior to 4.27.2F, only IPv4 addresses were supported for the VXLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only. This feature enables a VTEP to send VXLAN encapsulated packets using an IPv6 underlay.
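
A sketch of the intent, assuming the VTEP source interface simply carries an IPv6 address; additional knobs may be required depending on release, and the addresses are illustrative:

   interface Loopback0
      ipv6 address 2001:db8::11/128
   !
   interface Vxlan1
      vxlan source-interface Loopback0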

This feature expands Multi Domain EVPN VXLAN to support an Anycast Gateway model as the mechanism for gateway

This feature enables support for migrating from only using VCS as the control plane to only using EVPN as a control

VCS EVPN VXLAN 4.23.1F

VXLAN flood lists are typically configured via CLI or learned via control plane sources such as EVPN. The

Current VXLAN decapsulation logic requires the following hits on affected switches listed in the following

VRF VXLAN 4.26.2F

This feature allows selecting Differentiated Services Code Point (DSCP) and Traffic Class (TC) values for packets at VTEPs in the VXLAN encapsulation and decapsulation directions, respectively. DSCP is a field in the IP header and TC is a tag associated with a packet within the switch; both influence the Quality of Service the packet receives. This feature can be enabled via configuration as explained later in this document.