Previously, the maximum valid port channel ID was equal to the maximum number of port channels configurable on the

Measured boot is a tamper-detection mechanism that records a system's boot process. It calculates cryptographic hashes of system components and configurations, which are then securely stored in the Platform Configuration Registers (PCRs) of a Trusted Platform Module (TPM) chip. This process creates a secure "hash chain" of the boot sequence. After the system starts, the TPM Quote operation, along with the PCR extension records, can be used to verify the PCR values, confirming that the system components are unchanged and the software is trusted.

Measured boot is an anti-tamper mechanism. It calculates cryptographic hashes of software system components and extends them into the Trusted Platform Module (TPM) security chip. Upon startup, with the feature enabled, the Aboot bootloader and EOS calculate hashes of various system components and extend those hashes into the Platform Configuration Registers (PCRs), which are registers within the TPM. Each calculation-and-extension event is called a measured boot event, and each event is associated with a revision number to help the user identify changes to the event.
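
The hash-chain property of PCR extension can be sketched in a few lines. This is a simplified model of a SHA-256 PCR bank for illustration, not actual TPM or Aboot code:

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: new_value = SHA-256(old_value || measurement).

    A PCR can only be extended, never written directly, so its final
    value commits to the entire ordered sequence of measurements.
    """
    return hashlib.sha256(pcr_value + measurement).digest()

# Simulate measuring two boot components into one PCR.
pcr = bytes(32)  # PCRs start zeroed at power-on (SHA-256 bank)
for component in [b"bootloader-image", b"kernel-image"]:
    measurement = hashlib.sha256(component).digest()
    pcr = pcr_extend(pcr, measurement)

# A verifier replaying the same event log reaches the same PCR value;
# any altered or reordered component yields a different digest.
```

This mirrors how a TPM Quote plus the extension records lets a verifier replay the log and confirm the reported PCR values.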

Media Access Control Security (MACSec) is an industry standard encryption mechanism to protect all traffic flowing

MetaMux is an FPGA-based feature available on Arista’s 7130 platforms. It performs ultra-low latency Ethernet packet multiplexing with or without packet contention queuing. The port-to-port latency is a function of the selected MetaMux profile, front panel ingress port, front panel egress port, FPGA connector ingress port, and platform being used.

MetaWatch is an FPGA-based feature available for Arista 7130 Series platforms. It provides precise timestamping of packets, aggregation and deep buffering for Ethernet links. Timestamp information and other metadata such as device and port identifiers are appended to the end of the packet as a trailer.
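
As a rough illustration of consuming such a trailer, the sketch below strips a hypothetical 12-byte trailer (48-bit seconds, 32-bit nanoseconds, one-byte device ID, one-byte port ID) from the end of a captured frame. The field layout here is an assumption for illustration only; consult the MetaWatch documentation for the actual trailer format on your platform.

```python
import struct

# Hypothetical trailer layout (illustrative, not the real format):
# ...original frame... | 6B seconds | 4B nanoseconds | 1B device | 1B port
TRAILER_LEN = 12

def split_trailer(frame: bytes):
    """Separate the original frame bytes from the appended metadata trailer."""
    payload, trailer = frame[:-TRAILER_LEN], frame[-TRAILER_LEN:]
    secs_hi, secs_lo, nanos, device, port = struct.unpack("!HIIBB", trailer)
    seconds = (secs_hi << 32) | secs_lo
    return payload, {"seconds": seconds, "nanoseconds": nanos,
                     "device": device, "port": port}

# Example: a 60-byte frame with the assumed trailer appended.
frame = b"\x00" * 60 + struct.pack("!HIIBB", 0, 1700000000, 250000000, 7, 3)
payload, meta = split_trailer(frame)
```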

CloudVision provides support for microperimeter segmentation and enforcement as part of Arista’s Multi-Domain Segmentation Service (MSS) for Zero Trust Networking (ZTN).

ZTN works to reduce lateral movement into increasingly smaller areas where workloads are granularly identified and only approved connections are permitted.

Mirror on drop is a network visibility feature which allows monitoring of MPLS or IP flow drops occurring in the ingress pipeline. When such a drop is detected, it is sent to the control plane where it is processed and then sent to configured collectors. Additionally, CLI show commands provide general and detailed statistics and status.

This feature allows a user to configure a mirror session with subinterface sources from the CLI. This feature is only available with ingress mirroring (rx direction).

Port mirroring allows you to duplicate Ethernet packets or frames on a source interface to send to a remote host, such as DANZ Monitoring Fabric (DMF). The mirrored packets or frames can be sent via a SPAN interface dedicated for communication with the host or over an L2 Generic Routing Encapsulation (L2GRE) tunnel.

Arista switches provide several mirroring features. Filtered mirroring to CPU adds a special destination to the mirroring features that allows the mirrored traffic to be sent to the switch supervisor. The traffic can then be monitored and analyzed locally without the need for a remote port analyzer. A typical use case for this feature is debugging and troubleshooting.

DMF 8.7.0 introduces an updated dashboard for viewing sFlow drops. The DMF Analytics Node (AN), acting as a Mirror on Drop (MOD) sFlow collector, displays reasons for dropped packets by analyzing overall drops and drops by flow.

In an MLAG setup, routing on a switch (MLAG peer) is possible using its own bridge/system MAC, VARP MAC or VRRP MAC. When a peer receives an IP packet with destination MAC set to one of the aforementioned MACs, the packet gets routed if the hardware has enough information to route the packet. Before this feature was introduced, if the destination MAC was the peer’s bridge MAC, the packet was L2-bridged over the peer link and routed on the peer. Using the peer link to bridge L3 traffic to the peer in this way is undesirable, especially when the MLAG peers can route the packets themselves.

MLAG currently checks that basic MLAG configuration (e.g., the domain ID) is consistent before forming a peering with the peer.

When the MLAG peer link goes down, the secondary peer assumes the primary peer is down/dead, and takes over the primary

MLAG TOI

In an MLAG setup, periodic TCP/UDP heartbeats are sent over peer link to ensure IP connectivity between peers. Prior

This feature allows users to configure L2 subinterfaces on MLAG interfaces. L2 subinterfaces are not supported on the MLAG peer-link.

The objective of Maintenance Mode on MLAG is to gracefully drain away the traffic (L2 and BGP) flowing through a switch

MLAG Smart System Upgrade (SSU) provides the ability to upgrade the EOS image of an MLAG switch with minimal traffic disruption.

MLAG will support the following features: bridging, routing, STP, and VARP.

If an MLAG flaps on one peer, then we may have to remap the MAC addresses learned, such that the reachability is via the

On an MLAG chassis, MAC addresses learned on individual peers are synced, and the appropriate interfaces are mapped to these MAC addresses. In the case of unexpected events, such as a reload of one of the peers in the MLAG chassis or flapping of one or more MLAG interfaces, some traffic loss may be observed.

For packets sent and received on the front-panel interfaces, this feature allows creation of a profile to configure buffer reservations in the MMU (MMU = Memory Management Unit which manages how the on-chip packet buffers are organized). The profile can contain configurations for ingress and egress. On the ingress, configuration is supported at both a port level as well as a priority-group level. 

The main objective of this feature is to prevent modular systems from being shut down due to insufficient power by powering off cards if there is not enough power in the system at card startup.

This feature allows the removal of a configurable number of leading bytes starting from the Ethernet layer of packets sent to a monitor session. A new per-monitor session CLI command is provided to configure this, up to a maximum of 90 bytes.

With the 17.0 release, you can view the Tunnel Status and Tunnel State of the standby VXLAN tunnel. Until now, you could only see the status of the tunnel being used; there was no way to know whether your standby tunnel was reachable. With this release, you can view the Tunnel Status and the Tunnel State of your primary or secondary tunnel operating in the Standby Mode.

From the 4.29.2F release of EOS, proactive probing of servers is supported. Using this feature, Arista switches can continuously probe configured servers to check their liveness and use the information obtained from these probes when sending requests to the servers.

The feature MP BGP Multicast provides a way to populate the MRIB (Multicast Routing Information Base). MRIB is an

TOI 4.20.1F

The intended purpose of this feature is to introduce a server-streaming RPC. When a client subscribes to this RPC, it receives a message any time there is an update to the hardware programming state of an MPLS route or the nexthop group to which it points. Note that messages are only streamed in this RPC callback for versioned MPLS routes that point to versioned nexthop groups; messages are not streamed via this RPC for MPLS routes and nexthop groups that do not meet these criteria.

This feature allows users to preserve IP TTL and MPLS EXP (also known as TC) value on MPLS routers, as well as add a user-specified TTL/EXP value when pushing new MPLS labels in pipe mode. With the added pipe mode support, packets can traverse the network such that only the LSP ingress and egress nodes are visible to the end users and the MPLS core network can be hidden from the end user.

EOS 4.15.0F adds support for MPLS encapsulation of IP packets in EOS. The functionality is exposed through two

Multiprotocol Label Switching (MPLS) is a networking process that replaces complete network addresses with short

MPLS-over-GRE encapsulation support in EOS 4.17.0 enables tunneling IPv4 packets over MPLS over GRE tunnels. This feature leverages next-hop group support in EOS. With this feature, IPv4 routes may be resolved via MPLS-over-GRE next-hop group to be able to push one MPLS label and then GRE encapsulate the resulting labelled IPv4 packet before sending out of the egress interface.
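
The push-label-then-GRE-encapsulate ordering can be sketched at the byte level. This is a simplified illustration using minimal headers (no GRE checksum/key/sequence, a single label, and a placeholder inner packet), not the switch's forwarding implementation:

```python
import struct

def mpls_label(label: int, exp: int, bos: int, ttl: int) -> bytes:
    """Encode a 4-byte MPLS label stack entry (label | EXP | BoS | TTL)."""
    return struct.pack("!I", (label << 12) | (exp << 9) | (bos << 8) | ttl)

def gre_header(proto: int) -> bytes:
    """Minimal GRE header: version 0, no optional fields, payload ethertype."""
    return struct.pack("!HH", 0, proto)

inner_ipv4 = b"..."  # the routed IPv4 packet (placeholder bytes)

# Step 1: push the MPLS label onto the IPv4 packet.
labelled = mpls_label(label=100, exp=0, bos=1, ttl=64) + inner_ipv4
# Step 2: GRE-encapsulate the labelled packet (0x8847 = MPLS unicast).
gre_payload = gre_header(0x8847) + labelled
# An outer IPv4 header (IP protocol 47 = GRE) would then be prepended
# before the packet is sent out of the egress interface.
```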

This feature allows the Arista switch to act as the tunnel head for an MPLS tunnel and is exposed through two

Generic UDP Encapsulation (GUE) is a general method for encapsulating packets of arbitrary IP protocols within a UDP tunnel. GUE provides an extensible header format with optional data. In this release, the ability to encapsulate MPLS over GUE packets of variant 1 header format has been added. 

MRU (maximum receive unit) enforcement provides the ability to drop frames that exceed a configured threshold on the ingress interface.

EOS supports Multicast Source Discovery Protocol (MSDP) peering over TCP. Previously, MSDP sessions in EOS did not provide a built-in TCP-level authentication mechanism, leaving the MSDP TCP connection susceptible to spoofed or injected TCP segments (e.g., forged FIN/ACK/RSTs).

The TCP MSS clamping feature involves clamping the maximum segment size (MSS) in the TCP header of TCP SYN packets if it exceeds the configured MSS ceiling limit for the interface. Clamping the MSS value helps avoid IP fragmentation in tunnel scenarios by ensuring that the MSS is small enough to accommodate the extra overhead of the GRE and tunnel outer IP headers. One of the most common use cases for this feature is connectivity to cloud providers via GRE that require asymmetric routing (for example, for DDoS protection).
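
The clamping arithmetic for the common GRE-over-IPv4 case can be sketched as follows. The 1500-byte egress MTU and header sizes are assumptions for illustration:

```python
# Assumed values: 1500B egress MTU, 20B outer IPv4 + 4B basic GRE overhead,
# 20B inner IPv4 + 20B TCP headers (no options).
EGRESS_MTU = 1500
TUNNEL_OVERHEAD = 20 + 4   # outer IPv4 + basic GRE header
INNER_IP_TCP = 20 + 20     # inner IPv4 + TCP headers

# Largest TCP payload that fits in one tunneled packet without fragmentation.
mss_ceiling = EGRESS_MTU - TUNNEL_OVERHEAD - INNER_IP_TCP  # 1436

def clamp_mss(syn_mss: int, ceiling: int = mss_ceiling) -> int:
    """Rewrite the MSS option in a TCP SYN only when it exceeds the ceiling."""
    return min(syn_mss, ceiling)
```

A SYN advertising the typical 1460-byte MSS would be rewritten down to 1436, while a SYN already below the ceiling passes through unchanged.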

While migrating from PVST to MSTP, or vice versa, the network engineer may choose not to run MSTP throughout the

EOS supported two routing protocol implementations: multi-agent and ribd. The ribd routing protocol model is removed starting with the EOS-4.32.0F release, leaving multi-agent as the only routing protocol model. Both models largely work the same way, though there are subtle differences.

This feature provides the ability to interconnect EVPN VXLAN domains. Domains may or may not be within the same data center network, and the decision to stretch/interconnect a subnet between domains is configurable. The following diagram shows a multi-domain deployment using symmetric IRB. Note that two domains are shown for simplicity, but this solution supports any number of domains.

Multi-hop BFD allows for liveness detection between systems whose path may consist of multiple hops. With an


Until now, a multi-band client (for example, a phone with 2.4, 5, and 6 GHz radios) could connect to an Access Point (AP) using only one of the bands. Therefore, only one connection link is formed between the client and the AP. Multi-link Operation (MLO) is the capability of the client and the AP to connect to more than one band simultaneously, thereby establishing multiple links. Clients that can connect to the Access Point over multiple radio links simultaneously are called Multi-Link Devices (MLD).

This feature adds support for OSPFv3 multi-site domains (currently for the IPv6 address family only) as described in RFC 6565 (OSPFv3 as a Provider to Customer Edge Protocol for BGP/MPLS IP Virtual Private Networks (VPNs)) and enables BGP VPN routes to retain their original route type if they are in the same OSPFv3 domain. Two sites are considered to be in the same OSPFv3 domain if it is intended that routes from one site to the other be considered intra-network routes.

The Multi-vCenter VM Support in Single Policy feature enhances scalability and configuration management by allowing the inclusion of Virtual Machines (VMs) from multiple vCenters within a single policy. Previously, integrating a large number of vCenters with a single DMF fabric required a separate policy for each instance. With this update, DMF supports configuring match rules to include multiple VMs across disparate vCenters, unifying policy application and reducing configuration overhead.

In conventional VXLAN deployments, each MLAG pair of switches is represented as a common logical VTEP, and VXLAN traffic can be decapsulated on either switch. In some networks, there are hosts that are singly connected to one switch of the MLAG pair. VXLAN packets destined for a singly connected host could land on the other MLAG peer and subsequently be forwarded over the MLAG peer link to reach the destination host. This path is undesirable since it consumes bandwidth on the peer link.

In EVPN, an overlay index is a field in type-5 IP Prefix routes that indicates that they should resolve indirectly rather than using resolution information contained in the type-5 route itself. Depending on the type of overlay index, this resolution information may come from type-1 auto discovery or type-2 MAC+IP routes. For this feature the gateway IP address field of the type-5 NLRI is used as the overlay index, which matches the target IPv4 / IPv6 address in the type-2 NLRI. Other types of overlay index are described in RFC9136, but these are currently unsupported.
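
The indirect resolution can be sketched with a toy lookup. The route representations below are simplified stand-ins for illustration, not actual EVPN NLRI encodings:

```python
# Type-2 MAC+IP routes, keyed by overlay IP: the VTEP/MAC that advertised it.
type2_routes = {"10.0.0.1": {"vtep": "192.0.2.10", "mac": "00:1c:73:aa:bb:cc"}}

# Type-5 IP prefix route carrying a gateway IP as its overlay index.
type5_route = {"prefix": "172.16.0.0/16", "gateway_ip": "10.0.0.1"}

def resolve_type5(route, type2):
    """Resolve a type-5 route indirectly through its gateway-IP overlay
    index, rather than using resolution info in the type-5 route itself."""
    gw = route.get("gateway_ip")
    if gw and gw in type2:
        return type2[gw]  # forward via the matching type-2 route's VTEP
    return None           # unresolved until a matching type-2 route arrives

nh = resolve_type5(type5_route, type2_routes)
```

If the matching type-2 route is later withdrawn or moves to a different VTEP, the type-5 prefix automatically re-resolves without the type-5 route itself being re-advertised, which is the point of the indirection.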

This solution allows delivery of multicast traffic in an IP-VRF using multicast in the underlay network. It builds on top of [L2-EVPN], adding support for L3 VPNs and Integrated Routing and Bridging (IRB).  The protocol used to build multicast trees in the underlay network is PIM Sparse Mode.