Paths to the Data Center Upgrade

At last year’s ECOC trade show and conference, much of the talk in the conference sessions was about the replacement of 100G by 200/400G transceivers. However, a different story emerged when talking to end-users walking the show floor. Most data center managers, whose job is typically to keep medium-sized data centers running smoothly, are not yet much concerned with 400G and may not be for several years. In fact, users of 400G interfaces are today found overwhelmingly in hyperscale/cloud data centers. Elsewhere, the focus is much more on transitioning from a 10G to a 100G infrastructure or, to a limited but growing extent, from 10G to 25G.

While technology is at the core of the 400G transition, this is not the case with the evolution of data centers to 100G. The development of 100G for the data center began about 15 years ago, and for the most part 100G should be regarded as a mature technology.

This does not mean that standards development for 100G is over and done with, but the 100G standards work happening now is at the margins. For example, in the past couple of years the IEEE’s P802.3 group, which governs the standardization of Ethernet, has concerned itself with adding some Physical Layer specs and management parameters for the full range of higher-speed Ethernets, including 100G. Work has also gone on toward a standard defining a 100G PHY carried over a single wavelength on a DWDM infrastructure, with a range of 80 km or more.

This sort of thing will matter more to a service provider contemplating next-generation metro networks than it will to the average data center manager. Data center managers in industrial firms, hospitals, and retail are less concerned with technical niceties than with the requirements for upgrading their data center architectures. The information must flow! As we see it, the network manager now has two scenarios for data centers in the near future:

Add some connections to the existing infrastructure: Some network managers, after some reflection, may decide that their existing architecture is almost good enough and that adding more connections at current data rates will do the trick, without buying more switches.

This means that they can put off a real upgrade for a couple of years. From a technology standpoint, this may be considered an example of hiding one’s head in the sand. But from a financial perspective, putting off purchases for a couple of years can yield substantial rewards.

The 25G/100G option: While the “do nothing” option just discussed may make sense in the short term (and even the longer term for some), most centers are going to shift to a 25G/100G architecture. Most obviously it is a shift away from 10G driven by the need for more bandwidth, typically to support video streaming or more flexibility in the rack.

Just a few years ago, upgrading to 100G meant in practice replacing 10G links with 40G and 100G links. Today 40G connectivity is in rapid decline and is being replaced by n x 25G connectivity, which is less expensive to implement than 40G links. One 100G port can be broken out into 4 x 25G links, providing an option to connect two servers at 50G (2 x 25G) each.

In the “old days,” 40G had to be run to each server from separate switch ports. With the 25G/100G approach, only one switch port is used and connectivity at the server is 50G, not 40G. A double plus. In the 25G/100G upgrade there are 100G connections between switches and 50G (perhaps just 25G) connections to the servers.
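To make the port arithmetic concrete, here is a minimal Python sketch of the trade-off just described: one 100G switch port broken out into 4 x 25G lanes serving two servers at 50G each, versus a dedicated 40G switch port per server. The function names and rack sizes are purely illustrative assumptions, not vendor tooling or figures from any real deployment.

```python
# Illustrative sketch only: compares switch-port usage and per-server bandwidth
# for the old 40G-per-server design versus a 100G port broken out into 4 x 25G
# lanes shared by two servers (as described in the text above).

def legacy_40g(servers: int) -> dict:
    """Old-style design: each server gets its own dedicated 40G switch port."""
    return {"switch_ports": servers, "gbps_per_server": 40}

def breakout_25g_100g(servers: int) -> dict:
    """25G/100G design: each 100G port splits into 4 x 25G, two servers per port."""
    ports = -(-servers // 2)  # ceiling division: two servers share one 100G port
    return {"switch_ports": ports, "gbps_per_server": 2 * 25}

if __name__ == "__main__":
    for n in (8, 24, 48):  # hypothetical rack sizes, chosen only for illustration
        old, new = legacy_40g(n), breakout_25g_100g(n)
        print(f"{n} servers: 40G design -> {old['switch_ports']} switch ports at "
              f"{old['gbps_per_server']}G per server; 25G/100G breakout -> "
              f"{new['switch_ports']} switch ports at {new['gbps_per_server']}G per server")
```

The numbers simply restate the “double plus”: the breakout approach halves the switch ports consumed while raising per-server connectivity from 40G to 50G.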

Price and 100G: Over the next few years, what we are looking at is a wholesale deployment of many 100G and 25G transceivers. While such transceivers are not especially expensive individually, in aggregate the transceiver bill is going to add up to a significant sum, making third-party sourcing of the necessary transceivers something to think about.

In this context, one should remember that it is not just the transceivers that need to be swapped out. In many cases, older switches are not designed to support the new 25G/100G environment and must be replaced, and even newer switches may not support enough 100G ports. So new switches, and probably some re-wiring for 100G, will have to be budgeted for to get the new infrastructure right.
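As a way of organizing that budgeting exercise, the following is a minimal, hypothetical Python sketch that totals the major line items (optics, replacement switches, re-wiring). Every quantity and unit cost in it is a made-up placeholder, included only to show the structure of the calculation; it is not actual or typical pricing.

```python
# Rough budgeting sketch for the point above: the upgrade bill is more than
# just optics. All unit costs below are hypothetical placeholders, not quoted
# prices; substitute real vendor numbers before drawing any conclusions.

def upgrade_budget(n_100g_optics: int, n_25g_optics: int,
                   n_new_switches: int, cabling_runs: int,
                   cost_100g: float, cost_25g: float,
                   cost_switch: float, cost_per_run: float) -> dict:
    """Sum the major line items of a 25G/100G refresh: optics, switches, cabling."""
    items = {
        "transceivers": n_100g_optics * cost_100g + n_25g_optics * cost_25g,
        "switches": n_new_switches * cost_switch,
        "re_wiring": cabling_runs * cost_per_run,
    }
    items["total"] = sum(items.values())
    return items

if __name__ == "__main__":
    # Every quantity and price here is a made-up placeholder for illustration.
    print(upgrade_budget(n_100g_optics=64, n_25g_optics=256,
                         n_new_switches=4, cabling_runs=128,
                         cost_100g=400.0, cost_25g=60.0,
                         cost_switch=15000.0, cost_per_run=50.0))
```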
