This feature allows the Arista switch to act as the tunnel head for an MPLS tunnel and is exposed through two

Generic UDP Encapsulation (GUE) is a general method for encapsulating packets of arbitrary IP protocols within a UDP tunnel. GUE provides an extensible header format with optional data. In this release, support for encapsulating MPLS over GUE using the variant 1 header format has been added.
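
As an illustration of the encapsulation layering, the sketch below packs an MPLS label stack and prepends a UDP header, as a tunnel head would before adding the outer IP header. This is a simplified model, not Arista's implementation: the label values and source port are arbitrary, and the destination port 6080 is the IANA-assigned GUE port.

```python
import struct

def mpls_label_entry(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry: label(20) | TC(3) | S(1) | TTL(8)."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def udp_encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Prepend a UDP header (checksum left 0, as permitted for tunnels over IPv4)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

# Inner MPLS packet: a two-label stack in front of the client payload.
stack = mpls_label_entry(1000, 0, False, 64) + mpls_label_entry(2000, 0, True, 64)
frame = udp_encapsulate(stack + b"inner-ip-packet", 49152, 6080)
```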

MRU (maximum receive unit) enforcement provides the ability to drop frames that exceed a configured threshold on the ingress interface.

The TCP MSS clamping feature clamps the maximum segment size (MSS) in the TCP header of TCP SYN packets when it exceeds the configured MSS ceiling for the interface. Clamping the MSS helps avoid IP fragmentation in tunnel scenarios by ensuring that the MSS is small enough to accommodate the extra overhead of the GRE and tunnel outer IP headers.
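
A minimal sketch of the clamping operation on a SYN's TCP options follows; it is an illustration of the mechanism, not the switch's data-plane code, and the 1410-byte ceiling is an arbitrary example chosen to leave room for GRE plus outer IP overhead.

```python
def clamp_mss(syn_options: bytes, mss_ceiling: int) -> bytes:
    """Rewrite the MSS option (kind=2, len=4) in a TCP SYN's options if it
    exceeds the configured ceiling; other options pass through unchanged."""
    out = bytearray(syn_options)
    i = 0
    while i < len(out):
        kind = out[i]
        if kind == 0:          # End of option list
            break
        if kind == 1:          # NOP, single byte
            i += 1
            continue
        length = out[i + 1]
        if kind == 2 and length == 4:   # MSS option
            mss = int.from_bytes(out[i + 2:i + 4], "big")
            if mss > mss_ceiling:
                out[i + 2:i + 4] = mss_ceiling.to_bytes(2, "big")
        i += length
    return bytes(out)

opts = bytes([2, 4, 0x05, 0xB4])        # advertised MSS = 1460
clamped = clamp_mss(opts, 1410)          # rewritten to 1410
```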

While migrating from PVST to MSTP, or vice versa, the network engineer may choose not to run MSTP throughout the

EOS previously supported two routing protocol implementations: multi-agent and ribd. The ribd routing protocol model is removed starting from the EOS-4.32.0F release, leaving multi-agent as the only routing protocol model. Both models largely work the same way, though there are subtle differences.

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data center network, and the decision to stretch/interconnect a subnet between domains is configurable. The following diagram shows a multi-domain deployment using symmetric IRB. Note that two domains are shown for simplicity, but this solution supports any number of domains.

Multi-hop BFD allows for liveness detection between systems whose path may consist of multiple hops. With an

TOI 4.20.1F

Until now, a multi-band client (for example, a phone with 2.4, 5, and 6 GHz radios) could connect to an Access Point (AP) using only one of the bands. Therefore, only one connection link is formed between the client and the AP. Multi-link Operation (MLO) is the capability of the client and the AP to connect to more than one band simultaneously, thereby establishing multiple links. Clients that can connect to the Access Point over multiple radio links simultaneously are called Multi-Link Devices (MLD).

This feature adds support for OSPFv3 multi-site domains (currently for the IPv6 address family only) as described in RFC 6565 (OSPFv3 as a Provider to Customer Edge Protocol for BGP/MPLS IP Virtual Private Networks (VPNs)), and enables BGP VPN routes to retain their original route type if they are in the same OSPFv3 domain. Two sites are considered to be in the same OSPFv3 domain if routes from one site to the other are intended to be treated as intra-network routes.

In conventional VXLAN deployments, each MLAG pair of switches is represented as a common logical VTEP, and VXLAN traffic can be decapsulated on either switch. In some networks, hosts are singly connected to one switch of the MLAG pair. VXLAN packets destined for a singly connected host can land on the other MLAG peer and subsequently be forwarded over the MLAG peer-link to reach the destination host. This path is undesirable since it consumes bandwidth on the peer-link.

MultiAccess is an FPGA-based feature available on certain Arista 7130 platforms. It performs low-latency Ethernet multiplexing with optional packet contention queuing, storm control, VLAN tunneling, and packet access control. The interface to interface latency is a function of the selected MultiAccess profile, front panel interfaces, MultiAccess interfaces, configuration settings, and platform being used.

This solution allows delivery of multicast traffic in an IP-VRF using multicast in the underlay network. It builds on top of [L2-EVPN], adding support for L3 VPNs and Integrated Routing and Bridging (IRB).  The protocol used to build multicast trees in the underlay network is PIM Sparse Mode.

EVPN multicast IRB allows multicast traffic from the external PIM domain to flow through the EVPN network via the PIM EVPN Gateway Designated Router (PEG-DR). The solution does not work when the external PIM source or RP is not connected to the PEG-DR in the external PIM domain. EVPN Multicast Transit solves this by allowing any PEG with transit configured (PEG-Transit) to act as the PEG-DR.

The solution described in this document allows multicast traffic arriving on a VRF interface on a Provider’s Edge (PE) router to be delivered to Customer’s Edge (CE) routers with downstream receivers in the same VPN.

Multicast Only Fast Reroute (MoFRR) is a feature based on PIM sparse mode (PIM SM) protocol to minimize packet loss in a

LANZ adds support for monitoring congestion on backplane (or fabric) ports on DCS 7304, DCS 7308, DCS 7316, DCS

This feature adds all-active (A-A) multihoming support on the multi-domain EVPN VXLAN-MPLS gateway. It allows L2 and L3 ECMP to form between the multihoming gateways on the TOR devices inside the site and on the gateways in the remote sites. Therefore, traffic can be load-balanced to the multi-homing gateway and redundancy and fast convergence can be achieved.

In Tap Aggregation mode, an interface can be configured as a tap or tool port. Tap ports are used to 'tap' the traffic and

Multiple VLAN Registration Protocol (MVRP) is a Layer 2 protocol that allows access points to propagate VLANs created on CV-CUE to the connected switches. Real-time propagation of the configuration gives you the flexibility to configure your wired and wireless network in one interface and distribute it to other active interfaces, without having to manage and maintain the configuration in each interface separately.

The NAT Application Layer Gateway (ALG) feature allows FTP connections between client and server to be translated using

The NAT peer state synchronization feature provides redundancy and resiliency for dynamic NAT across a pair of devices, mitigating the risk of a single NAT device failure. Since the NAT state is shared between the two switches, the failure of one switch can be tolerated: the other switch retains the translations.

Non-default VRF support is now available for static unicast NAT, twice NAT, and dynamic NAT. VRF support

While preserving the information from the previous version, the updated DMF Interfaces UI introduces a new layout, design, and enhanced functionalities for improved interface viewing and monitoring for easy troubleshooting.

The new Switches page provides a modernized overview of all switches configured in DMF. A header and tabulated layout allow you to observe different aspects of installed switches and provision new switches from the same dashboard.

CloudVision Cognitive Unified Edge (CV-CUE) 18.0 introduces the following new features and enhanced functionalities:

CloudVision Cognitive Unified Edge (CV-CUE) 19.0 introduces the following new features and enhanced functionalities:

CloudVision Cognitive Unified Edge (CV-CUE) 20.0 introduces the following new features and enhanced functionalities: CV-CUE introduces longer time intervals for the LED Blink operation. Prior to the 20.0 release, the available time intervals were 5, 15, and 30 minutes. From this release onwards, 2-, 4-, 8-, 12-, and 24-hour time intervals are added. The longer blink duration is useful in enterprise network deployments for locating distant Access Points.

Multi-link Operation (MLO) is the capability of the client and the AP to connect to more than one band simultaneously, thereby establishing multiple links. The clients that can connect to the Access Point over multiple radio links simultaneously are called Multi-Link Devices (MLD).

DMF version 8.8.0 introduces a redesigned workflow for Interface Groups in the DMF UI. An interface group is a collection of one or more filter or delivery interfaces, making it more convenient to create a policy: users no longer need to specify each individual interface to which the policy applies.

DMF 8.7.0 introduces a redesigned Recorder Node configuration workflow, monitoring page, and query workflow. 

In the 13.0 release, CloudVision Cognitive Unified Edge (CV-CUE) adds a new report and also includes some enhancements to existing reports.

Nexthop Group backup-activation events are produced by forwarding agents. Nexthop Groups support configuring backup paths through EOS RPC APIs and the CLI. Whenever a route or prefix starts pointing to its configured backup paths, a backup-activation event is logged into the event-monitor DB with the nexthop-group name, an accurate timestamp, and other attributes. The event monitoring feature also supports filtering events by attributes such as nexthop-group name and version.

The nexthop group feature allows users to manually configure a set of tunnels. Nexthop group counters provide the ability to count packets and bytes associated with each tunnel nexthop, irrespective of the number of times it appears in one or more nexthop groups. In other words, if a nexthop group entry shares a tunnel resource with another entry, they will also share the same counter.
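
The sharing semantics can be sketched as counters keyed by the underlying tunnel nexthop rather than by (group, entry) pairs. This is a conceptual model only; the group and tunnel names are hypothetical.

```python
from collections import defaultdict

# Counters are keyed by the tunnel nexthop itself, not by (group, index),
# so entries in different groups that share a tunnel share one counter.
counters = defaultdict(lambda: {"packets": 0, "bytes": 0})

nexthop_groups = {
    "NHG-A": ["tunnel-1", "tunnel-2"],
    "NHG-B": ["tunnel-2", "tunnel-3"],   # tunnel-2 shared with NHG-A
}

def count_packet(group: str, entry_index: int, packet_len: int) -> None:
    """Attribute a forwarded packet to the tunnel behind a group entry."""
    tunnel = nexthop_groups[group][entry_index]
    counters[tunnel]["packets"] += 1
    counters[tunnel]["bytes"] += packet_len

count_packet("NHG-A", 1, 100)   # tunnel-2 via NHG-A
count_packet("NHG-B", 0, 200)   # tunnel-2 via NHG-B: same counter
```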

Nexthop Group Event Monitoring in the RPC layer on Arista switches allows for quick and filterable viewing of Nexthop Group events, i.e., additions, deletions, or callbacks associated with hardware programming of Nexthop Groups configured through the EosSdkRpc agent.

Nexthop selection using GRE key allows for nexthop routing selection based on the GRE key of a GRE encapsulated IP

Nexthop group match in PBR policy enables the user to match incoming packets being routed to a specified nexthop group

TOI 4.17.0F PBR

An introduction to Nexthop-groups can be seen in the Nexthop-Group section of EOS. With this feature, IP packets matching a static Nexthop-Group route can be encapsulated with a GRE tunnel and forwarded.
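
A configuration along the following lines illustrates the idea; this is an assumed sketch, and the exact CLI syntax, group name, and addresses should be checked against the EOS configuration guide for the release in use.

```
! Illustrative only - syntax may vary by EOS release
nexthop-group NHG1 type gre
   entry 0 tunnel-destination 203.0.113.10
   entry 1 tunnel-destination 203.0.113.11
!
ip route 198.51.100.0/24 nexthop-group NHG1
```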

NIM-1QC is a single-port OCP 3.0 standard NIM card manufactured by Intel. The AWE-7230R-4TX-4S-F and AWE-5310-F devices have 2 NIM (Network Interface Module) slots, and the AWE-7250R-16S-F and AWE-5510-F devices have 4. These devices now support NIM-1QC cards.

NIM-4S is a 4-port OCP 3.0 standard NIM card manufactured by Intel. The AWE-7230R-4TX-4S-F and AWE-5310-F devices have 2 NIM (Network Interface Module) slots, and the AWE-7250R-16S-F and AWE-5510-F devices have 4. These devices now support NIM-4S cards.

Configuration of arbitrary combinations of speeds on subinterfaces is now restricted on 800G CMIS Arista transceivers: only uniform sets of speeds may be configured on applicable transceivers. This affects Arista-branded 800G active optical transceivers.

VMware NSX Controllers expect hardware VTEPs to monitor the liveness of the Replication Service Node via BFD. In

EOS 4.35.0F introduces support for Network Time Security (NTS), as defined in RFC8915. NTS provides modern cryptographic security for the client-server mode of the Network Time Protocol (NTP). It separates key establishment from time synchronization by using a TLS-based NTS Key Establishment (NTS-KE) protocol to negotiate symmetric keys and encrypted cookies. These cookies are included in subsequent NTP packets to enable stateless authentication by the server. NTS ensures that time synchronization data is received from a legitimate source and has not been modified in transit.
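
The cookie mechanism can be illustrated with a toy model. This is not the RFC 8915 wire format: real NTS seals cookies with AES-SIV and negotiates keys over NTS-KE, whereas this sketch uses a bare HMAC and leaves the key unencrypted inside the cookie purely for brevity. It only demonstrates how a cookie lets a stateless server recover the per-client key and authenticate a request.

```python
import hmac, hashlib, secrets

MASTER = secrets.token_bytes(32)        # server-side cookie-sealing key

def make_cookie(client_key: bytes) -> bytes:
    """Server: seal client_key into an opaque cookie (toy: key + MAC tag)."""
    tag = hmac.new(MASTER, client_key, hashlib.sha256).digest()
    return client_key + tag

def open_cookie(cookie: bytes) -> bytes:
    """Server: recover and verify the client key from a returned cookie."""
    key, tag = cookie[:-32], cookie[-32:]
    expect = hmac.new(MASTER, key, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("forged cookie")
    return key

# "NTS-KE": the client obtains a key and cookies over TLS (modeled locally).
c2s_key = secrets.token_bytes(32)
cookie = make_cookie(c2s_key)

# Later, each NTP request carries a cookie; the server keeps no per-client
# state, recovering the key from the cookie to authenticate the request.
request = b"ntp-request-fields"
req_mac = hmac.new(c2s_key, request, hashlib.sha256).digest()
server_key = open_cookie(cookie)
assert hmac.compare_digest(
    hmac.new(server_key, request, hashlib.sha256).digest(), req_mac)
```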

For an octal port such as a QSFPDD or OSFP, this feature renumbers the ports on a system to have 4 configurable

In some situations, packets received by an ASIC need to be redirected to the control plane: for example, packets destined to the router's own address, or packets that need special handling by the CPU. The control plane cannot handle as many packets as the ASIC, so a system is needed that protects the control plane against DoS attacks and prioritizes the packets sent to the CPU. This is accomplished by CoPP (control-plane policing). CoPP already functions today; however, CPU queues are statically allocated to specific features. If a feature is not used, the CPU queue statically allocated to it sits idle, wasting resources.
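
The resource problem and its pool-based alternative can be sketched as follows. This is a conceptual model of on-demand queue allocation, not EOS internals; the class and feature names are hypothetical.

```python
class CpuQueuePool:
    """Sketch of on-demand CPU queue allocation: instead of statically
    binding each feature to a queue, queues come from a shared pool and
    return to it when a feature is disabled."""

    def __init__(self, num_queues: int):
        self.free = list(range(num_queues))
        self.bound = {}                  # feature name -> queue id

    def enable(self, feature: str) -> int:
        if feature in self.bound:
            return self.bound[feature]
        if not self.free:
            raise RuntimeError("no CPU queues left")
        q = self.free.pop()
        self.bound[feature] = q
        return q

    def disable(self, feature: str) -> None:
        q = self.bound.pop(feature)      # queue returns to the pool
        self.free.append(q)

pool = CpuQueuePool(num_queues=8)
q_bgp = pool.enable("bgp")
pool.disable("bgp")                      # queue reusable by another feature
```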

The EOS Event Manager feature provides the ability to specify a condition and an action to be carried out when that

With the 18.0 release, you can trigger the Auto-Channel Selection (ACS) Mode and Transmit Power Control (TPC) Mode for a radio on demand. In ACS Mode, the Access Point (AP) scans the network to select the best channel. In Auto-TPC Mode, the AP automatically adjusts its transmit power to minimize interference with neighboring Arista APs.

With the 15.0.1 release, CV-CUE extends the wired configuration and monitoring capabilities. You can now onboard switches (710P, 720XP, 720DP) to CV-CUE. You can also configure switches and manage switch-related settings directly from the UI.

These are the release notes and configuration guide for the OpenConfig feature available in the 4.20.1F EOS