As data center managers look to the future, signs of cloud-based growth are everywhere: higher-performance virtualized servers, demands for higher bandwidth and lower latency, and faster server-to-switch connections. Whether you're running 10Gb or 100Gb today, the transition to 400Gb may come faster than you realize.
Many factors are driving the demand for 400G, 800G, and beyond. Data centers have moved from multiple disparate networks to more virtualized, software-driven environments. At the same time, they are deploying more machine-to-machine connections and reducing the number of switches between them. Most of the components that will support the cloud-scale data centers of the future already exist; what is missing is a comprehensive strategy for a physical-layer infrastructure that ties them all together.
So, how do we do this?
Transceiver
Jumping to 400Gb requires doubling the density of your current SFP, SFP+, or QSFP+ transceivers. This is where QSFP Double Density (QSFP-DD) and Octal Small Form Factor Pluggable (OSFP) modules come into play. Both use eight lanes instead of four, and 32 ports can fit in a 1RU panel. The most notable differences are backward compatibility and power handling: QSFP-DD is backward compatible with existing QSFP modules, while the slightly larger OSFP can dissipate more power. The MSAs list several optical connection options, including LC, mini-LC, MPO-8, MPO-12, MPO-16, SN, MDC, and CS connectors, depending on the application being supported.
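To make the density math concrete, here is a minimal Python sketch of the faceplate arithmetic described above. The lane count and port count come from the figures in this section; the 50Gb-per-lane rate is an assumption reflecting the 400Gb generation and is illustrative rather than a specification.

    # Faceplate arithmetic for octal form factors (QSFP-DD / OSFP).
    # Assumption: 8 lanes per module at 50Gb per lane, 32 ports per 1RU,
    # as described in the text above; figures are illustrative.
    LANES_PER_MODULE = 8
    GBPS_PER_LANE = 50
    PORTS_PER_RU = 32

    port_speed_gb = LANES_PER_MODULE * GBPS_PER_LANE          # 400 Gb per port
    panel_capacity_tb = PORTS_PER_RU * port_speed_gb / 1000   # 12.8 Tb per 1RU

    print(f"Per-port speed: {port_speed_gb} Gb")
    print(f"1RU faceplate capacity: {panel_capacity_tb} Tb")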
Connector
Multiple connector options provide more ways to allocate the higher octal capacity. A 12-fiber MPO has traditionally been used to support six duplex channels, yet in many applications, such as 40GBase-SR4, only four channels (eight fibers) are actually used. For octal modules, MPO-16 may therefore be a better fit, since its sixteen fibers map directly to eight duplex lanes. Other connectors worth investigating are SN and MDC, which use 1.25 mm ferrule technology and offer flexible breakout options for high-speed optical modules. The open question is which transceivers will be paired with which connectors in the future.
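As a rough way to compare these connector choices, the sketch below (an illustration, not vendor guidance) computes how many of a connector's fibers are actually lit when paired with a given module, assuming two fibers (transmit and receive) per optical lane as in SR4/SR8-style parallel optics.

    # Fiber utilization per connector, assuming 2 fibers (Tx/Rx) per lane.
    def utilization(connector_fibers: int, lanes_used: int) -> float:
        """Fraction of the connector's fibers actually used."""
        return (lanes_used * 2) / connector_fibers

    # 40GBase-SR4 on a 12-fiber MPO: 4 lanes light only 8 of 12 fibers.
    print(f"MPO-12 with a 4-lane module: {utilization(12, 4):.0%} of fibers used")

    # An octal (8-lane) module on MPO-16 uses all 16 fibers.
    print(f"MPO-16 with an 8-lane module: {utilization(16, 8):.0%} of fibers used")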
Chip
Of course, there is no limit to the demand for bandwidth. Currently, the limiting factor is not how many ports you can fit on the front of the switch, but how much capacity the chipset inside can provide. Higher radix combined with higher SerDes speeds yields higher capacity. A typical switch supporting 100Gb applications around 2015 used 128 lanes at 25Gb, for 3.2Tb of switch capacity. Reaching 400Gb requires increasing the ASIC to 256 lanes at 50Gb, for 12.8Tb of switch capacity. The next step, 256 lanes at 100Gb, takes us to 25.6Tb. Plans to push lane speeds to 200Gb are already being considered, a daunting task that will take years to perfect.
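The capacity progression above follows directly from lane count times lane speed. The short sketch below reproduces that arithmetic; the generation labels are informal, and only the figures quoted in this section are used.

    # Switch capacity = number of SerDes lanes x per-lane speed.
    generations = [
        ("~2015, 100Gb era", 128, 25),    # 3.2 Tb
        ("400Gb era",        256, 50),    # 12.8 Tb
        ("Next step",        256, 100),   # 25.6 Tb
    ]

    for label, lanes, gbps in generations:
        capacity_tb = lanes * gbps / 1000
        print(f"{label}: {lanes} lanes x {gbps}Gb = {capacity_tb} Tb switch capacity")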
As a professional structured cabling manufacturer with over 16 years in the industry, Copperled never stops exploring and developing emerging products.
Whether you're looking for 10Gb or 100Gb fiber optic systems, just call us. The Copperled team will be glad to serve you.
Thanks and Best Regards!
International Marketing Dept.
Shenzhen Copperled Technology Co., Ltd.