April 23, 2026

The demands of large-scale AI training and inference continue to reshape data center architectures, driving the need for higher bandwidth, lower latency, and accelerated AI networking. AMD and Arista are collaborating to enable a new generation of accelerated infrastructure that seamlessly bridges the compute and networking domains.

This introductory session covers the AMD Pensando™ Pollara 400 AI NIC and its P4‑based architecture for high‑performance AI networking. You will also learn how advanced packet processing, Ultra Ethernet Consortium (UEC) features, and deep telemetry help reduce CPU bottlenecks, manage congestion, and improve GPU job completion times. Together with Arista’s high-performance networking platforms and EOS network software, these innovations enable elastic scaling, efficient data movement, and improved visibility across AI and data-intensive environments.

Attendees will gain insights into joint solution architectures, integration strategies, and performance data demonstrating how AMD and Arista technologies combine to deliver higher efficiency and lower TCO, and to accelerate readiness for next-generation AI infrastructure.