
As of 2017, we are about five years into the availability of 100GbE, and it has been an interesting road so far. Initial interest was driven primarily by telecom providers, and that niche demand brought 100G to market in a fairly immature state. This somewhat premature rollout has left us with a surprising number of form factors this early in the lifecycle:
- CFP: The original 100GbE form factor, physically very large. Legacy at this point.
- CFP2: The first attempt to shrink a CFP optic to a more manageable size. Still used for long-haul or DWDM 100GbE optics.
- CXP: An early multimode or direct attach copper form factor. Still used occasionally in current production equipment.
- CPAK (Cisco proprietary): With the CFP2 form factor still too large, Cisco came out with their proprietary CPAK form factor to provide a smaller optic option. Still used by Cisco in their router lines and Nexus 7000/7700 linecards.
- MXP (Arista proprietary): Arista, in a similar position to Cisco, went with a proprietary fixed optic technology instead of a pluggable. MXP ports are 10/40/100GbE compatible and can actually light up all 24 strands in the MPO cable, allowing operation in 12x10GbE, 3x40GbE, or 1x100GbE mode. Arista still offers MXP port switches, but all new models use QSFP28 optics instead.
- QSFP28: The QSFP28 is the most recent form factor for 100GbE. Unlike the other form factors above, which use 10x 10Gbps lanes, the QSFP28 uses 4x 25Gbps lanes, which makes for a much less complex (and hence smaller) optic. The QSFP28 is the same design as the original QSFP, just with faster lanes, so availability and cost reduction have come at a rapid pace with this form factor. Because of this similarity, QSFP28 ports on switches support both 40GbE and 100GbE modes for easy backwards compatibility.
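To make the lane math concrete, here’s a quick Python sketch. It’s purely illustrative: the lane counts and per-lane rates simply restate the 10x10G vs. 4x25G characterization above, not an exhaustive spec reference.

```python
# Illustrative only: lane count x lane rate for the form factors above,
# following the 10x10G vs. 4x25G characterization in this post.
FORM_FACTORS = {
    # name: (lanes, Gbps per lane)
    "CFP":    (10, 10),
    "CFP2":   (10, 10),
    "CXP":    (10, 10),
    "CPAK":   (10, 10),
    "QSFP28": (4, 25),
}

for name, (lanes, rate) in FORM_FACTORS.items():
    print(f"{name:7s}: {lanes} x {rate}Gbps = {lanes * rate}GbE")
```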
As demand has spread and the standards have evolved, I think we now have a relatively clear picture of where we are headed. There are a couple of dividing lines, though, that will continue to shape how these optics are deployed.
One major consideration is the existing cable plant when we’re dealing with multimode fiber runs. Native 100G (or 40G, for that matter) doesn’t work over a standard two-strand OM3/OM4 cable plant; the multilane nature of the optics requires MPO cabling instead. This has led to some creative (and proprietary) 40GbE optics in the past, but for 100GbE over multimode, an MPO-based cable plant is required. One caveat here is that we have two different SR standards: there are both 10-lane (CFP, CFP2, CXP, CPAK) and 4-lane (QSFP28) optics, and they have different cabling requirements. SR10 optics require a 24-fiber MPO cable, while SR4 optics use a 12-fiber MPO cable. Imagine an ASR9K router (which only supports SR10 multimode optics) needing a 100G interconnect to an Arista switch within a datacenter (which likely only supports SR4 optics). SR would seem the obvious choice, but because the CFP end speaks SR10 and the QSFP28 end speaks SR4, the two can’t interoperate over multimode; LR4 over single-mode would actually have to be used instead.
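Here’s a small, hypothetical sketch of that interconnect scenario. The `Optic` class and its attribute values are mine (not from any vendor tool); they just encode the point that both ends of a link have to speak the same standard over the same medium and connector:

```python
# Hypothetical compatibility check for the ASR9K <-> Arista scenario above.
# Attribute values follow the text: SR10 = 10 lanes over 24-fiber MPO,
# SR4 = 4 lanes over 12-fiber MPO, LR4 = duplex single-mode fiber.
from dataclasses import dataclass

@dataclass(frozen=True)
class Optic:
    standard: str      # e.g. "100GBASE-SR10"
    fiber: str         # "MMF" or "SMF"
    lanes: int         # optical lanes on the fiber
    connector: str     # "MPO-24", "MPO-12", or "LC duplex"

SR10 = Optic("100GBASE-SR10", "MMF", 10, "MPO-24")
SR4  = Optic("100GBASE-SR4",  "MMF", 4,  "MPO-12")
LR4  = Optic("100GBASE-LR4",  "SMF", 4,  "LC duplex")

def can_link(a: Optic, b: Optic) -> bool:
    """Both ends must use the same standard, fiber type, and connector."""
    return (a.standard, a.fiber, a.connector) == (b.standard, b.fiber, b.connector)

# ASR9K CFP (SR10 only) to Arista QSFP28 (SR4 only): no multimode option.
print(can_link(SR10, SR4))   # False -> multimode won't work end to end
print(can_link(LR4, LR4))    # True  -> LR4 on both ends is the workaround
```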
The low-interface-count/high-speed requirement at the edge still exists, and while 100GbE has not been broadly adopted at the enterprise level, I’d expect that to evolve over the long term. What will really move the ball forward is the data center. Common rack and blade servers are still largely built around a 10G architecture, and while another ten-fold jump is technically possible, the next step won’t likely be that large. This is where 25GbE comes in, which will likely be the next step in the DC.
For example, let’s use the Arista 7060CX-32S (Cisco’s N9K-C9236C is similar): a 1RU switch with 32x QSFP28 100G ports. Assuming a handful of 100G uplinks/cross connects, you’d still have around 28x 100G ports available for breakout cables, connecting to 112x 25G NIC ports in highly virtualized servers. That is a massive amount of throughput in 1.75" of rack space.
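The back-of-the-envelope math behind that, assuming four ports are reserved for uplinks (my assumption; the numbers otherwise come straight from the example above):

```python
# Breakout math for the 7060CX-32S example above.
total_qsfp28_ports = 32
uplink_ports = 4                       # assumed 100G uplinks/cross connects
server_facing = total_qsfp28_ports - uplink_ports     # 28 ports left over

breakout_per_port = 4                  # each QSFP28 breaks out to 4x 25GbE
nic_ports = server_facing * breakout_per_port          # 112x 25G NIC ports
downlink_capacity_gbps = nic_ports * 25

print(f"{server_facing} ports x {breakout_per_port} = {nic_ports} 25GbE server ports")
print(f"~{downlink_capacity_gbps / 1000:.1f} Tbps of server-facing capacity in 1RU")
```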
Although these 100G switches are shipping now, I would still say we have a few years to go before there is widespread appeal. For one, latest-generation servers with dual high-core-count processors aren’t really saturating existing 10GbE infrastructure. But next-generation servers from Cisco, Dell, and HP are coming late next year, and they will be available with even higher core counts along with mainstream 25GbE NICs. More cores will lead to more VMs, and that will be the driving force behind this next generation of hardware.
I’m never a fan of the bleeding edge, since you typically just pay to be a guinea pig. I expect 10G to remain very relevant for the rest of this decade, but the transition now has a reasonably predictable path to plan around. If you have any projects on the horizon that you’d like to discuss specifically, don’t hesitate to reach out. Lastly, I love getting feedback and anecdotes from personal experience, so feel free to give me your take on the subject.