IT directors understand that private cloud datacenters give organizations full control over their workloads. However, adding new workloads, or scaling capacity for existing workloads, can trigger a costly CAPEX event and take considerable time. Conversely, virtual public cloud implementations can be provisioned more granularly, while offering the promise of near-limitless, on-demand expandability on a pay-for-use basis. For these reasons, along with easy adaptability, quick time to deployment, and ready access to a global customer base, cloud-based applications have become a competitive necessity for organizations.

The advent of cloud computing changes the approach to data center networks in terms of throughput, resilience, and management. Cloud computing is a compelling way for businesses both small (private cloud) and large (public cloud) to take advantage of web-based applications: one can deploy applications more rapidly across shared server and storage resource pools than is possible with conventional enterprise solutions.

Data analytics has become a key element of the business decision process over the last decade, and the ability to process unprecedented volumes of data is a consequent deliverable and differentiator in the information economy. Hadoop and other distributed systems are increasingly the solution of choice for next-generation data volumes, and a high-capacity, any-to-any, easily manageable networking layer is critical for peak Hadoop performance.
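To illustrate why an any-to-any fabric matters for Hadoop, the following back-of-the-envelope sketch estimates the east-west traffic a MapReduce shuffle places on the network. All figures here are hypothetical example values, not measurements from any real cluster:

```python
# Rough, illustrative estimate of the east-west traffic a MapReduce
# shuffle phase places on the data center fabric. All inputs are
# hypothetical example values, not measurements.

def shuffle_traffic_per_node_gb(job_input_tb, shuffle_ratio, num_nodes):
    """Approximate shuffle data each node must send across the network.

    job_input_tb  -- total job input size in terabytes
    shuffle_ratio -- fraction of input re-emitted by the map phase
    num_nodes     -- worker nodes participating in the job
    """
    shuffle_total_gb = job_input_tb * 1000 * shuffle_ratio
    per_node_gb = shuffle_total_gb / num_nodes
    # With uniform partitioning, (num_nodes - 1) / num_nodes of each
    # node's intermediate data crosses the network to other nodes.
    return per_node_gb * (num_nodes - 1) / num_nodes

# A hypothetical 100 TB job with a 0.5 shuffle ratio on 200 nodes:
print(shuffle_traffic_per_node_gb(100, 0.5, 200))
```

Even this modest example pushes roughly a quarter terabyte per node across the fabric in a single shuffle, which is why uniform any-to-any bandwidth, rather than oversubscribed hierarchical tiers, is the figure of merit for Hadoop networks.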

For the past few decades, the most pervasive storage interconnect was a dedicated Fibre Channel network connecting compute to storage; there really was no other choice. Recently, technological innovations in both storage and networking have led most organizations to converge their infrastructure onto an Ethernet-based IP fabric. The choice to converge is typically based on cost, performance, and simplicity; this document will not cover the sometimes political nature of that decision.

Collapsing the hierarchical, multi-tiered networks of the past into more compact, resilient, feature-rich, two-tiered leaf-spine or Spline™ networks has clear advantages in the data center. The benefits of more scalable and more stable layer 3 networks far outweigh the challenges this architecture creates. The layer 2 networking fabrics of the past lacked stability and scale, limiting workload size and mobility and confining virtual workloads to a small set of physical servers. As virtualization scaled in the data center, the true limitations of these fabrics quickly surfaced.
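The scale of a two-tier leaf-spine design can be derived from simple port arithmetic. The sketch below shows the basic relationships; the port counts and the one-uplink-per-spine wiring rule are illustrative assumptions, not a sizing recommendation:

```python
# Illustrative two-tier leaf-spine sizing. Port counts are
# hypothetical example values, not a recommendation.

def leaf_spine_capacity(leaf_ports, uplinks_per_leaf, spine_ports):
    """Derive basic scale figures for a two-tier leaf-spine fabric.

    leaf_ports       -- total ports per leaf switch
    uplinks_per_leaf -- leaf ports reserved as uplinks to the spines
    spine_ports      -- downlink ports per spine switch
    """
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    # With one uplink to each spine, every leaf is one spine hop from
    # every other leaf, and traffic is ECMP-balanced across all spines.
    max_leaves = spine_ports
    return {
        "spines": uplinks_per_leaf,
        "max_leaves": max_leaves,
        "max_server_ports": server_ports_per_leaf * max_leaves,
        "oversubscription": server_ports_per_leaf / uplinks_per_leaf,
    }

# 48-port leaves with 4 uplinks into 32-port spines:
fabric = leaf_spine_capacity(48, 4, 32)
print(fabric)
```

The same arithmetic makes the design's growth model visible: adding leaves adds server ports without re-architecting, while the spine port count bounds the fabric's width.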

Consumers are now heavily invested in mobile access for applications and content. This shift in consumption models is driving new business requirements and creating new challenges for Telcos, from increasingly high-bandwidth Over-The-Top (OTT) traffic to competition from cloud providers. Instead of competing directly with public cloud offerings, Telcos are adopting cloud principles to deliver their network services more efficiently. Some are providing cloud connection services that give their existing customers secure VPN access to the public cloud.

For many years, one of the biggest challenges in network design has been effectively managing traffic flow end-to-end across the network. Stated more specifically: how is traffic intelligently classified and path-engineered throughout the network? Furthermore, how can this traffic be classified into differentiated levels of service without adding unnecessary complexity to management, control plane, or data plane state in this critically important part of the network?

A great, scalable network alone is not enough. A data center can have tens of thousands of compute, storage, and network devices, presenting a large operational challenge to IT. In addition, as the network scales, IT is being asked to reduce operational expenses and increase responsiveness to changing business needs.

Architecting a fault-tolerant and resilient network fabric is only one part of the challenge facing network managers and operations teams today; building a scalable, fault-tolerant network is not sufficient on its own. Typical data centers can contain tens, hundreds, or even thousands of networking devices.

Performance, resiliency, and programmability across the entire network are now fundamental business requirements for next-generation cloud and enterprise data center networks. The need for agility and deployment at scale, with regard to provisioning and network operations, requires a new level of automation and integration with current data center infrastructure. The underlying design of the network operating system provides the architectural foundation to meet these requirements.
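A concrete form this automation takes is template-driven provisioning: render per-device configuration from one source of truth instead of hand-editing hundreds of devices. The sketch below is a minimal illustration; the hostnames, addressing scheme, and template are hypothetical:

```python
# Illustrative provisioning sketch: render a per-leaf management
# snippet from one template so many devices can be configured
# consistently. Hostnames and the addressing scheme are hypothetical.

TEMPLATE = """hostname {hostname}
interface Management1
   ip address 10.0.{pod}.{index}/24
"""

def render_leaf_configs(pod, leaf_count):
    """Return a config snippet for each leaf in the given pod."""
    return {
        f"leaf{pod}-{i}": TEMPLATE.format(hostname=f"leaf{pod}-{i}",
                                          pod=pod, index=i)
        for i in range(1, leaf_count + 1)
    }

configs = render_leaf_configs(pod=1, leaf_count=3)
print(configs["leaf1-2"])
```

Generating configuration this way keeps every device consistent by construction and makes scaling out a matter of raising `leaf_count`, which is the kind of operational leverage the requirements above describe.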
