As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution.

In a VXLAN routing setup using VXLAN Controller Service (VCS), this feature will enable the following on a switch that is running as a VCS client.

This is achieved by using the next-hop of the static route as the peer IP address for the BFD session. The static route is either installed or removed based on the status of the underlying BFD session. A static route whose next-hop is configured to be tracked by BFD is referred to as a ‘BFD tracked static route’ in the context of this document. This feature is supported for both IPv4 and IPv6 static routes.
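
The control flow can be pictured with the short Python sketch below (hypothetical class and names, not EOS internals): the static route is present in the routing table only while the BFD session to its next-hop peer reports Up.

    # Minimal sketch of BFD-tracked static route behavior (hypothetical names,
    # not the EOS implementation): the route is installed only while the BFD
    # session to its next hop is Up, and is removed when the session goes Down.
    from dataclasses import dataclass, field

    @dataclass
    class BfdTrackedStaticRoute:
        prefix: str            # e.g. "10.1.0.0/16" or an IPv6 prefix
        next_hop: str          # also used as the BFD peer address
        installed: bool = field(default=False, init=False)

        def on_bfd_state_change(self, session_up: bool, rib: set) -> None:
            """Install or remove the route in a toy RIB based on BFD session state."""
            if session_up and not self.installed:
                rib.add((self.prefix, self.next_hop))
                self.installed = True
            elif not session_up and self.installed:
                rib.discard((self.prefix, self.next_hop))
                self.installed = False

    rib = set()
    route = BfdTrackedStaticRoute("10.1.0.0/16", "192.0.2.1")
    route.on_bfd_state_change(session_up=True, rib=rib)    # BFD up: route installed
    assert ("10.1.0.0/16", "192.0.2.1") in rib
    route.on_bfd_state_change(session_up=False, rib=rib)   # BFD down: route removed
    assert not rib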

This feature introduces a new CLI command which disables the above-mentioned propagation of DSCP and ECN bits from the outer IP header. 

By default, the DSCP and ECN bits of VXLAN bridged packets are not rewritten. Currently, for bridged packets undergoing VXLAN encapsulation, the DSCP in the outer IP header is derived from TC and the ECN bits are set to zero. The desired behavior is that the outer IP header should be remarked with ingress packet DSCP and ingress packet ECN. Also, local congestion should be handled correctly.
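
The desired remarking can be summarized with the illustrative Python sketch below (simplified field handling, hypothetical helper; local congestion is signaled by marking CE only when the inner packet is ECN-capable):

    # Illustrative sketch of outer-header DSCP/ECN derivation for VXLAN-bridged
    # packets (simplified; hypothetical helper, not the switch implementation).
    ECT0, ECT1, CE, NOT_ECT = 0b10, 0b01, 0b11, 0b00

    def outer_tos(inner_dscp: int, inner_ecn: int, local_congestion: bool) -> tuple[int, int]:
        """Return (outer_dscp, outer_ecn) for the encapsulating IP header."""
        outer_dscp = inner_dscp                  # propagate ingress DSCP
        outer_ecn = inner_ecn                    # propagate ingress ECN
        # Signal local congestion only if the inner packet is ECN-capable.
        if local_congestion and inner_ecn in (ECT0, ECT1):
            outer_ecn = CE
        return outer_dscp, outer_ecn

    assert outer_tos(46, ECT0, local_congestion=True) == (46, CE)
    assert outer_tos(46, NOT_ECT, local_congestion=True) == (46, NOT_ECT)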

This feature allows a Data Center (DC) operator to incrementally migrate their VXLAN network from IPv4 to IPv6.

IPv6 EVPN VXLAN 4.26.0F

As described in the Multi-VTEP MLAG TOI, singly connected hosts can lead to suboptimal peer-link utilization. By adding a local VTEP to each MLAG peer, the control plane is able to advertise singly connected hosts as being directly behind a specific local VTEP / MLAG peer.
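
The advertisement choice can be illustrated with the simplified sketch below (hypothetical addresses and structures): dually connected hosts are advertised behind the shared logical VTEP, while a singly connected host is advertised behind the local VTEP of the MLAG peer it attaches to.

    # Illustrative only: pick the VTEP address to advertise for a host based on
    # whether it is dually connected (shared logical VTEP) or singly connected
    # (the local VTEP of the MLAG peer it sits behind).
    SHARED_VTEP = "192.0.2.100"                 # logical VTEP of the MLAG pair
    LOCAL_VTEP = {"peer1": "192.0.2.101", "peer2": "192.0.2.102"}

    def advertised_vtep(host_links: set) -> str:
        if len(host_links) > 1:                 # dually connected via MLAG port-channel
            return SHARED_VTEP
        (peer,) = host_links                    # singly connected host
        return LOCAL_VTEP[peer]

    assert advertised_vtep({"peer1", "peer2"}) == SHARED_VTEP
    assert advertised_vtep({"peer2"}) == "192.0.2.102"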

This feature enables support for Macro Segmentation Service (MSS) to insert security devices into the traffic path.

EVPN VXLAN MSS 4.24.2F

This feature introduces the use of the BGP Domain PATH (D-PATH) attribute, which identifies the EVPN domain(s) through which EVPN MAC-IP routes have passed. The EOS DCI Gateway provides new mechanisms for users to specify the EVPN Domain Identifier for its local and remote domains. DCI Gateways sharing the same redundancy group should share the same local domain identifier and the same remote domain identifier.
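
A simplified sketch of how a domain identifier carried in D-PATH can prevent a route from being re-imported into a domain it has already traversed (the functions and identifiers below are hypothetical):

    # Simplified D-PATH loop check (illustrative only): a gateway rejects an
    # EVPN MAC-IP route whose D-PATH already contains its own local domain.
    def accept_route(d_path: list[str], local_domain_id: str) -> bool:
        """Reject routes that already traversed the local EVPN domain."""
        return local_domain_id not in d_path

    def export_route(d_path: list[str], local_domain_id: str) -> list[str]:
        """When re-exporting to the remote domain, prepend the local domain."""
        return [local_domain_id] + d_path

    path = export_route([], "65001:100")        # route leaves domain 65001:100
    assert accept_route(path, "65100:200")      # other domains still accept it
    assert not accept_route(path, "65001:100")  # looped route is rejected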

Ethernet VPN (EVPN) is an extension of the BGP protocol introducing a new address family: L2VPN (address family number 25) / EVPN (subsequent address family number 70).

In the traditional data center design, inter-subnet forwarding is provided by a centralized router, where traffic traverses across the network to a centralized routing node and back again to its final destination. In a large multi-tenant data center environment this operational model can lead to inefficient use of bandwidth and sub-optimal forwarding.

This feature adds control plane support for inter-subnet forwarding between EVPN and IPVPN networks.

This feature adds control plane support for inter-subnet forwarding between EVPN networks.

EVPN MPLS VXLAN 4.25.0F

“MLAG Domain Shared Router MAC” is a new mechanism that introduces an additional router MAC to be used by MLAG TOR switches.

EVPN VXLAN 4.21.3F

As described in the L3 EVPN VXLAN Configuration Guide, it is common practice to use Layer 3 EVPN to provide multi-tenancy.

In EOS 4.22.0F, EVPN VXLAN all-active multihoming L2 support is available. A customer edge (CE) device can connect to two or more provider edge (PE) devices.

Ethernet VPN (EVPN) networks normally require some measure of redundancy to reduce or eliminate the impact of outages and maintenance. RFC7432 describes four types of route to be exchanged through EVPN, with a built-in multihoming mechanism for redundancy. Prior to EOS 4.22.0F, MLAG was available as a redundancy option for EVPN with VXLAN, but not multihoming. EVPN multihoming is a multi-vendor standards-based redundancy solution that does not require a dedicated peer link and allows for more flexible configurations than MLAG, supporting peering on a per interface level rather than a per device level. It also supports a mass withdrawal mechanism to minimize traffic loss when a link goes down.
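
The benefit of mass withdrawal can be pictured with the simplified Python sketch below (hypothetical structures, not the EOS implementation): because remote PEs resolve MAC reachability through the Ethernet Segment Identifier (ESI), withdrawing a single Ethernet A-D per-ES route redirects every MAC learned behind that segment at once.

    # Illustrative mass-withdrawal model: MACs are resolved through their ESI,
    # so removing one Ethernet A-D per-ES route reroutes all of them at once.
    from collections import defaultdict

    esi_next_hops = defaultdict(set)   # ESI -> PE next hops (A-D per-ES routes)
    mac_to_esi = {}                    # MAC -> ESI (from type-2 MAC-IP routes)

    esi_next_hops["ESI-1"] = {"PE1", "PE2"}          # CE multihomed to PE1 and PE2
    mac_to_esi["00:aa:bb:cc:dd:01"] = "ESI-1"
    mac_to_esi["00:aa:bb:cc:dd:02"] = "ESI-1"

    def next_hops(mac: str) -> set:
        return esi_next_hops[mac_to_esi[mac]]

    def withdraw_per_es_route(esi: str, pe: str) -> None:
        """One withdrawal removes the failed PE for every MAC behind the ESI."""
        esi_next_hops[esi].discard(pe)

    withdraw_per_es_route("ESI-1", "PE1")            # PE1's ES link goes down
    assert all(next_hops(m) == {"PE2"} for m in mac_to_esi)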

EVPN gateway support for all-active (A-A) multihoming adds a new redundancy model to our multi-domain EVPN solution introduced in [1]. This deployment model introduces the concept of a WAN Interconnect Ethernet Segment identifier (WAN I-ESI). The WAN I-ESI allows the gateway’s EVPN neighbors to form L2 and L3 overlay ECMP on routes re-exported by the gateways. The identifier is shared by gateway nodes within the same domain (site) and set in MAC-IP routes that cross domain boundaries.
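
As a minimal sketch of the re-export behavior (hypothetical values and helper, not the actual route processing): both gateways of a site stamp the same WAN I-ESI on MAC-IP routes crossing the domain boundary, so a remote neighbor sees one Ethernet segment with two next hops and can form overlay ECMP.

    # Illustrative only: gateways in the same site stamp a shared WAN I-ESI on
    # re-exported MAC-IP routes so remote EVPN neighbors build overlay ECMP.
    WAN_I_ESI = "0011:2233:4455:6677:8899"   # shared by GW1 and GW2 (example value)

    def reexport(route: dict, gateway_ip: str) -> dict:
        """Re-export a MAC-IP route across the domain boundary."""
        out = dict(route)
        out["esi"] = WAN_I_ESI          # overwrite the site-local ESI
        out["next_hop"] = gateway_ip    # next hop becomes the gateway's VTEP
        return out

    r1 = reexport({"mac": "00:aa:bb:cc:dd:01", "esi": "0"}, "192.0.2.11")  # via GW1
    r2 = reexport({"mac": "00:aa:bb:cc:dd:01", "esi": "0"}, "192.0.2.12")  # via GW2
    # Same MAC, same I-ESI, two next hops -> the receiver can ECMP across GW1/GW2.
    assert r1["esi"] == r2["esi"] and r1["next_hop"] != r2["next_hop"]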

This feature enables support for an EVPN VXLAN control plane in conjunction with Arista’s OpenStack ML2 plugin.

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from one IPv6 host to another.

Starting with EOS release 4.22.0F, the EVPN VXLAN L3 Gateway using EVPN IRB supports routing traffic from IPv6 hosts in one subnet to IPv6 hosts in another.

In a traditional EVPN VXLAN centralized anycast gateway deployment, multiple L3 VTEPs serve the role of the centralized default gateway.

Typical WiFi networks utilize a single, central Wireless LAN Controller (WLC) to act as a gateway between the wireless APs and the wired network. Arista differentiates itself by allowing the wireless network to utilize a distributed set of aggregation switches to connect APs to the wired network. This feature allows a decentralized, distributed set of aggregation switches to bridge wireless traffic on behalf of their “local” APs, i.e., the APs configured to VXLAN tunnel all of their traffic to those aggregation switches.

There are use cases where broadcast, multicast, and unknown MAC traffic does not need to be flooded into the VXLAN overlay.

In VXLAN networks, broadcast DHCP requests are head-end-replicated to all VXLAN tunnel endpoints (VTEP). If a DHCP relay helper address is configured on more than one VTEP, each such VTEP relays the DHCP request to the configured DHCP server. This could potentially overwhelm the DHCP server as it would receive multiple copies of broadcast packets originated from a host connected to one of the VTEPs.

Each ARP/ND packet into a switch may generate an update for the switch ARP/Neighbor table, and this update may need to be synchronized with the MLAG peer when VXLAN is configured. Prior to this feature, these updates (on a VXLAN setup) were synchronized by sending a UDP packet (one packet per update) containing the IP/MAC/VLAN information from the MLAG peer where the ARP/ND packet was received to the other MLAG peer.
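
The per-update synchronization message described above can be sketched as follows (hypothetical encoding and port number, for illustration only):

    # Minimal sketch of the per-update sync message described above (hypothetical
    # encoding, not the actual MLAG protocol): each ARP/ND learn generates one
    # UDP datagram carrying the IP/MAC/VLAN binding to the MLAG peer.
    import json, socket

    def sync_arp_to_peer(peer_ip: str, ip: str, mac: str, vlan: int,
                         port: int = 4096) -> None:
        """Send one UDP datagram per ARP/ND update to the MLAG peer (illustrative)."""
        payload = json.dumps({"ip": ip, "mac": mac, "vlan": vlan}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, (peer_ip, port))

    # Example (would transmit one datagram per learned binding):
    # sync_arp_to_peer("10.0.0.2", "192.0.2.10", "00:aa:bb:cc:dd:01", 10)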

4.22.1F introduces support for ip address virtual for PIM and IGMP in MLAG and VXLAN. On a VLAN, the same IP address can be configured on multiple VTEPs.

This solution allows delivery of IPv6 multicast traffic in an IP-VRF using IPv4 multicast in the underlay network. The protocol used to build multicast trees in the underlay network is PIM Sparse Mode.

This solution allows delivery of both IPv4 and IPv6 multicast traffic in an IP-VRF using IPv6 multicast in the underlay network. The protocol used to build multicast trees in the underlay network is IPv6 PIM-SSM.

Several customers have expressed interest in using IPv6 addresses for VXLAN underlay in their Data Centers (DC). Prior to 4.24.1F, EOS only supported IPv4 addresses for VXLAN underlay, i.e., VTEPs were reachable via IPv4 addresses only.

As of EOS 4.22.0F, EVPN all-active multihoming is supported as a standardized redundancy solution.

EVPN VXLAN IAR L2 4.26.1F

Ethernet VPN (EVPN) is an extension of the BGP protocol introducing a new address family: L2VPN (address family number 25) / EVPN (subsequent address family number 70). It is used to exchange overlay MAC and IP address reachability information between BGP peers using type-2 routes; additionally, EVPN supports the exchange of layer 3 IPv4 and IPv6 overlay routes through type-5 EVPN routes (IP prefix advertisement).
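
For orientation, the two route shapes can be sketched as simplified records (the field names below are illustrative simplifications, not the wire format):

    # Illustrative data shapes only: a type-2 route carries MAC (and optionally
    # IP) reachability for an L2 domain, while a type-5 route carries an IP
    # prefix for the L3 overlay.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvpnType2Route:           # MAC/IP advertisement
        rd: str
        esi: str
        mac: str
        ip: Optional[str]           # optional host IP bound to the MAC
        vni: int

    @dataclass
    class EvpnType5Route:           # IP prefix advertisement
        rd: str
        prefix: str                 # IPv4 or IPv6 overlay prefix
        gateway: str
        vni: int

    EvpnType2Route("192.0.2.1:10010", "0", "00:aa:bb:cc:dd:01", "10.10.10.5", 10010)
    EvpnType5Route("192.0.2.1:50001", "10.20.0.0/16", "192.0.2.1", 50001)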

For traffic mirroring, Arista switches support several types of mirroring destinations. This document describes a new type of mirroring destination in which mirrored traffic is tunneled over VXLAN as the inner packet to a remote VTEP. This feature is useful when the traffic analyzer is a VTEP reachable over a VXLAN tunnel.
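
A minimal sketch of the encapsulation step, assuming a simplified view of the VXLAN header (the outer Ethernet/IP/UDP headers toward the remote VTEP are omitted, and the helper is hypothetical):

    # Illustrative sketch: the mirrored frame is carried as the inner payload of
    # a VXLAN packet sent toward the analyzer's VTEP, like bridged traffic would be.
    import struct

    def vxlan_encapsulate(mirrored_frame: bytes, vni: int) -> bytes:
        """Prepend an 8-byte VXLAN header (flags + VNI) to the mirrored frame."""
        flags = 0x08 << 24                      # I flag set: VNI is valid
        vxlan_header = struct.pack("!II", flags, vni << 8)
        return vxlan_header + mirrored_frame

    pkt = vxlan_encapsulate(b"\x00" * 64, vni=10010)
    assert len(pkt) == 72 and pkt[0] == 0x08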

This solution optimizes the delivery of multicast to a VLAN over an Ethernet VPN (EVPN) network. Without this solution, IPv6 multicast traffic in a VLAN is flooded to all Provider Edge (PE) devices that contain the VLAN.

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data center.

EVPN VXLAN 4.26.1F

This feature extends the multi-domain EVPN VXLAN feature introduced to support interconnect with EVPN MPLS networks. The following diagram shows a multi-domain deployment with EVPN VXLAN in the data center and EVPN MPLS in the WAN. Note that this is the only supported deployment model, and that an EVPN MPLS network cannot peer with an EVPN MPLS network.

In conventional VXLAN deployments, each MLAG pair of switches is represented as a common logical VTEP. VXLAN traffic can be decapsulated on either switch. In some networks, there are hosts that are singly connected to one switch of the MLAG pair.

[L2 EVPN] and [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in an L2VPN and in L3VPNs, respectively, using multicast in the underlay network.

This solution allows delivery of multicast traffic in an IP VRF using multicast in the underlay network. It builds on the Multicast EVPN IRB solution.

The Multicast EVPN IRB solution allows for the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in L3VPNs using multicast in the underlay network. This document covers only the information that is new or different for the Multicast EVPN Multiple Underlay Groups solution.

[L2 EVPN] and [Multicast EVPN IRB] solutions allow for the delivery of customer BUM (Broadcast, Unknown unicast and Multicast) traffic in an L2VPN and in L3VPNs, respectively, using multicast in the underlay network.

This feature adds all-active (A-A) multihoming support on the multi-domain EVPN VXLAN-MPLS gateway. It allows L2 and L3 ECMP to form between the multihoming gateways on the TOR devices inside the site and on the gateways in the remote sites. Traffic can therefore be load-balanced across the multihoming gateways, providing redundancy and fast convergence.

This feature allows packets to be VXLAN encapsulated after NAT translation, and reverse NAT translation to be applied to VXLAN tunnel terminated packets.
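
The order of operations can be illustrated with the hedged sketch below (hypothetical helpers and a simplified packet model): translation happens before encapsulation on ingress, and the reverse translation is applied after the VXLAN tunnel is terminated.

    # Order-of-operations sketch only (hypothetical helpers): NAT before VXLAN
    # encapsulation on the way in, reverse NAT after VXLAN termination on the way out.
    def encap_path(pkt: dict, nat_map: dict, vni: int) -> dict:
        translated = {**pkt, "src_ip": nat_map.get(pkt["src_ip"], pkt["src_ip"])}
        return {"vni": vni, "inner": translated}            # NAT, then encapsulate

    def decap_path(vxlan_pkt: dict, reverse_nat_map: dict) -> dict:
        inner = vxlan_pkt["inner"]                           # terminate the tunnel
        return {**inner, "dst_ip": reverse_nat_map.get(inner["dst_ip"], inner["dst_ip"])}

    nat = {"192.168.1.10": "100.64.0.10"}
    rev = {v: k for k, v in nat.items()}
    out = encap_path({"src_ip": "192.168.1.10", "dst_ip": "10.0.0.5"}, nat, 10010)
    back = decap_path({"vni": 10010, "inner": {"src_ip": "10.0.0.5", "dst_ip": "100.64.0.10"}}, rev)
    assert out["inner"]["src_ip"] == "100.64.0.10" and back["dst_ip"] == "192.168.1.10"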

By default, when an SVI is configured on a VXLAN VLAN, broadcast, unknown unicast, and unknown multicast (BUM) traffic received from the tunnel is punted to the CPU. However, sending unknown unicast and unknown multicast traffic to the CPU is unnecessary and can have negative side effects. Specifically, these packets take the L2Broadcast CoPP queue to the CPU.
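
One way to picture the resulting punt policy (a hedged illustration of the rationale above, not the CoPP implementation): tunnel-received broadcast may still need to reach the SVI, while unknown unicast and unknown multicast need not be copied to the CPU.

    # Hedged sketch of the punt decision (assumption for illustration only):
    # keep the CPU copy for broadcast from the tunnel, skip it for unknown UC/MC.
    def punt_to_cpu(traffic_class: str, from_vxlan_tunnel: bool, svi_on_vlan: bool) -> bool:
        if not (from_vxlan_tunnel and svi_on_vlan):
            return False
        return traffic_class == "broadcast"     # no CPU copy for unknown UC/MC

    assert punt_to_cpu("broadcast", True, True)
    assert not punt_to_cpu("unknown-unicast", True, True)
    assert not punt_to_cpu("unknown-multicast", True, True)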

Overlay IPv6 routing over VXLAN tunnels is simply routing IPv6 packets in and out of VXLAN tunnels, similar to overlay IPv4 routing.

EOS currently supports VXLAN L2 integration with external controllers using the Arista OVSDB HW VTEP schema ([HW VTEP]).

VXLAN 4.18.0F OVSDB

Selective ARP install is the selective programming of remote ARPs in hardware as received through EVPN Type 2 MAC-IP routes in an EVPN VXLAN/MPLS Integrated Routing and Bridging (IRB) scenario. Instead of installing every MAC+IP binding received from EVPN into the hardware, the switch installs them only when there is routed traffic destined to the IP, thereby saving TCAM space on the switch. However, there is a tradeoff as there is an initial one-time latency to install the hardware TCAM entry on the first flow of routed traffic to the IP.
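
The install-on-demand behavior can be modeled with the short sketch below (hypothetical tables standing in for the EVPN cache and the hardware neighbor table):

    # Illustrative model of selective install: remote MAC+IP bindings from
    # type-2 routes stay in a software cache and are programmed into the
    # hardware table only when routed traffic first needs them.
    evpn_cache = {"10.10.10.5": "00:aa:bb:cc:dd:01"}   # learned from type-2 routes
    hw_neighbor_table = {}                             # limited TCAM-backed table

    def route_packet(dst_ip: str) -> str:
        """Resolve the next-hop MAC, installing the entry on first use."""
        if dst_ip not in hw_neighbor_table:            # first flow: one-time install
            hw_neighbor_table[dst_ip] = evpn_cache[dst_ip]
        return hw_neighbor_table[dst_ip]

    route_packet("10.10.10.5")
    assert "10.10.10.5" in hw_neighbor_table           # installed on demand only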

Enabling “Proxy ARP/ND for Single Aggregation (AG) VTEP Campus Deployments without EVPN” allows an aggregation VTEP to proxy reply to a VXLAN-encapsulated ARP request/NS when the ARP/NS target host is remote and the ARP/ND binding is already learned by the AG VTEP.
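
The decision can be sketched as follows (hypothetical data structures): the AG VTEP answers only when the target is not locally attached and its binding is already known; otherwise the request is handled normally.

    # Simplified decision logic for the proxy reply described above
    # (hypothetical structures): reply on behalf of the target only when the
    # target is remote and its ARP/ND binding has already been learned.
    def proxy_reply(target_ip: str, learned_bindings: dict, local_hosts: set):
        """Return the MAC to answer with, or None to handle the request normally."""
        if target_ip in local_hosts:             # local target answers for itself
            return None
        return learned_bindings.get(target_ip)   # None if not yet learned

    bindings = {"10.1.1.20": "00:aa:bb:cc:dd:02"}
    assert proxy_reply("10.1.1.20", bindings, local_hosts={"10.1.1.10"}) == "00:aa:bb:cc:dd:02"
    assert proxy_reply("10.1.1.10", bindings, local_hosts={"10.1.1.10"}) is None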

Private VLAN is a feature that segregates a regular VLAN broadcast domain while maintaining all ports in the same IP subnet.