The Arista 7500 is built with the latest in silicon, systems, and software technologies. This section gives an overview of several of the key technologies in use within the Arista 7500:
Low-Latency Packet Processor
The Low-Latency Packet Processor is a processor with eight wirespeed ports of 1/10Gb Ethernet and very large buffers, and it connects directly to the Arista 7500 Low-Latency Virtual Output Queue (VOQ) fabric. This packet processor supports 160Gb of throughput and processes packets at the data link, network, and transport layers (L2-4). Forwarding decisions are made at Layer-2 and Layer-3, while protocol-level policies such as ACLs and QoS are applied based on L4 information. 384MB of packet buffer is attached to the packet processor to smooth out congestion and prevent frame drops. This is an unprecedented buffer density, large enough to support multiple servers or virtual machines per port with full TCP windows stored in buffer.
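The buffering claim above can be checked with simple arithmetic. The sketch below divides the 384MB packet buffer evenly across the eight ports and counts how many TCP windows fit per port; the even split and the classic 64KB window size are illustrative assumptions, since real deployments tune window sizes and buffer allocation.

```python
# Back-of-the-envelope check of the buffer-density claim (a sketch,
# not vendor data): 384 MB of buffer shared by 8 wirespeed ports.

BUFFER_BYTES = 384 * 1024 * 1024   # packet buffer per packet processor
PORTS = 8                          # wirespeed 1/10GbE ports per packet processor
TCP_WINDOW = 64 * 1024             # assumed classic 64 KB TCP window

per_port = BUFFER_BYTES // PORTS             # buffer available per port (even split assumed)
windows_per_port = per_port // TCP_WINDOW    # full TCP windows that fit in that buffer

print(f"Buffer per port: {per_port // (1024 * 1024)} MB")
print(f"Full 64 KB TCP windows per port: {windows_per_port}")
# 48 MB per port, 768 full 64 KB windows per port
```

Even at this coarse granularity, hundreds of full windows fit behind a single port, which is what allows many servers or virtual machines to burst simultaneously without frame drops.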
There are six low-latency packet processors per Arista 48-port 10GbE Linecard Module, resulting in a linecard module with significantly less power draw, and significantly more performance than other Ethernet systems.
10GbE, or 10 Gigabit Ethernet
The IEEE first published the 10GbE specification in 2002 as IEEE Std 802.3ae-2002. 10GbE is currently the fastest of the Ethernet standards, although 40GbE and 100GbE are in development. 10GbE is supported only in full-duplex mode; half duplex and CSMA/CD are not supported.
The IEEE 802.3 standards relating to 10GbE have been continuously enhanced with support for new media types over the past several years: 802.3ae-2002 (fiber -SR, -LR, -ER and -LX4 PMDs), 802.3ak-2004 (-CX4 copper twin-ax InfiniBand type cable), 802.3an-2006 (10GBASE-T copper twisted pair), 802.3ap-2007 (copper backplane -KR and -KX4 PMDs) and 802.3aq-2006 (fiber -LRM PMD with enhanced equalization).
Low-Latency Lossless Fabric
The Arista 7500 uses a store-and-forward fabric that is based on a virtual output queueing mechanism with arbitration, overspeed, and significant buffer capacity that ensures Ethernet traffic can be forwarded losslessly across the switch fabric. Even in a congested environment the Arista 7500 will not drop traffic, instead absorbing the congesting traffic into 384MB buffers on each packet processor, then providing back pressure with Priority Flow Control only when the buffer is filling. The fabric supports variable-length frames and can spread large frames across multiple fabric links to distribute load while ensuring in-order frame delivery.
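The buffer-then-back-pressure behaviour described above can be illustrated with a toy model: a congested port absorbs frames into its buffer rather than dropping them, and asserts Priority Flow Control (pause) once a fill threshold is crossed. The 80% threshold and the buffer split are illustrative assumptions, not Arista's actual implementation values.

```python
# Toy model of lossless congestion handling: absorb bursts into the
# buffer, assert PFC pause before the buffer can overflow. All
# thresholds and sizes here are illustrative.

class LosslessPort:
    def __init__(self, buffer_bytes, pfc_threshold):
        self.buffer_bytes = buffer_bytes
        self.pfc_threshold = pfc_threshold   # fraction of buffer fill that triggers pause
        self.used = 0
        self.pause_asserted = False

    def enqueue(self, frame_len):
        # A lossless port never drops while buffer remains; it back-pressures.
        if self.used + frame_len > self.buffer_bytes:
            raise RuntimeError("buffer exhausted - PFC should have paused the sender")
        self.used += frame_len
        # Assert pause once the buffer is filling past the threshold.
        self.pause_asserted = self.used >= self.pfc_threshold * self.buffer_bytes

port = LosslessPort(buffer_bytes=48 * 1024 * 1024, pfc_threshold=0.8)
for _ in range(30_000):
    port.enqueue(1500)           # burst of congesting 1500-byte frames
print(port.pause_asserted)       # True: pause asserted, no frame dropped
```

The key property is the ordering: the sender is paused while buffer headroom still remains, so congestion never forces a drop.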
The Arista 7500 switch fabric delivers 648Gbps per Linecard Module. This is 648Gbps ingress and 648Gbps egress concurrently, or 1.25Tbps of aggregate full-duplex forwarding capacity per slot. Unlike other companies, Arista does not double-count per-slot numbers, nor does Arista claim performance numbers based on guesses about future switch fabric performance and signal-integrity suppositions about passive backplanes. The Arista 7500 delivers 10 Terabits per second of aggregate switch fabric capacity today.
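The arithmetic behind these headline numbers can be sketched directly. The eight-slot chassis is an assumption here, and the per-slot figure works out to roughly 1.3Tbps raw, which the text conservatively quotes as 1.25Tbps.

```python
# Sketch of the fabric-capacity arithmetic: 648 Gbps ingress plus
# 648 Gbps egress per slot, scaled across an assumed 8-slot chassis.

PER_SLOT_INGRESS_GBPS = 648
SLOTS = 8                                     # assumed 8-linecard chassis

per_slot_duplex = 2 * PER_SLOT_INGRESS_GBPS   # ingress + egress, counted once
total_tbps = per_slot_duplex * SLOTS / 1000   # aggregate fabric capacity

print(f"{per_slot_duplex} Gbps full-duplex per slot")
print(f"{total_tbps:.2f} Tbps aggregate fabric capacity")
# 1296 Gbps full-duplex per slot, 10.37 Tbps aggregate
```

Note that the per-slot number is not doubled again when summing across slots; that is the "no double-counting" point the text makes.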
40GbE and 100GbE
The Arista 7500 is designed to provide enough switch fabric performance to enable usable densities of 40GbE and 100GbE as these technologies are ratified by the IEEE and come to market. One- or two-port modules in modular Ethernet switches are sometimes called 'Hollywood Modules': they are designed to showcase that a technology is deliverable and to gain bragging rights. Historically, however, a new Ethernet technology has not taken off in the market until a density of four or more ports is achieved cost-effectively.
Arista's goal in conceiving the Arista 7500 is to make 40GbE and 100GbE usable technologies for our customers without requiring any components in the system to be replaced - simply install a new linecard module and deploy a new wave of Ethernet technology.
Data Center Bridging
Data Center Bridging (DCB) refers to enhancements to Ethernet for use in the data center. The DCB Task Group of the IEEE 802.1 Working Group is responsible for setting the DCB standards. Traditional Ethernet was designed to be a best-effort network, and under certain circumstances could drop packets or deliver them out of order. Lossless delivery was generally considered the responsibility of a transport-layer protocol such as the Transmission Control Protocol (TCP).
Data Center Bridging is a collection of four IEEE standards that enable Ethernet to provide reliability without incurring the penalties of TCP. With the move to 10 Gbit/s and faster transmission rates, there is also a desire for finer-grained control of bandwidth allocation and for ensuring that bandwidth is used more effectively. Beyond the benefits to traditional application traffic, these enhancements make Ethernet increasingly applicable for storage and other loss- and latency-sensitive traffic.