At the Open Compute Summit, we saw some exciting announcements: Microsoft's new SONiC network OS, Google joining the Open Compute Project (OCP), and Equinix announcing that it will join the project and deploy an OCP network switch. The project is gaining even more support. The question is whether this set of technologies, originally designed for hyperscale data centers, will benefit regular enterprises in the future.
In the photo, you can see Facebook's 6 Pack Switch, a modular switch with 12 independent switching elements. It's painted in Facebook blue, but I didn't notice any thumbs-up "Like" icons on it.
The Open Compute Project's goal is to create an open standard for hardware design that benefits from the open design philosophy that helped create open-source software and the Internet. The result is a set of specifications for "OCP gear" that is based on open standards.
The Open Compute Project is supported by a large number of vendors and service providers: cloud operators like Google, Facebook, and Microsoft; telecoms like AT&T and Deutsche Telekom; telecom equipment makers such as Ericsson and Nokia; mainstream IT vendors such as Cisco, Dell, HP, and Juniper; and networking startups such as Big Switch Networks, Cumulus Networks, and Pica8.
The question is whether this technology can be used by regular enterprises. We already see support from large financial institutions, which have the scale to benefit from it.
Small enterprises may not possess the skills to use these systems, which require a fundamentally different way of operating. HPE believes this will be a gradual transition, trickling down from the hyperscale vendors and telecoms to regular enterprises over a journey of years.
But we've seen this type of transition before: minicomputers were replaced by x86 servers, and while only forward-looking enterprises initially adopted them, those systems are now considered the standard throughout the IT industry. HPE is fully supporting these efforts, with hardware and software based on open standards, and by contributing software such as OpenSwitch.
Many software projects blooming
I do notice that open community projects beget similar open community projects. In addition to OpenSwitch, we now have Microsoft's SONiC, Dell's OS10, and Open Network Linux from OCP itself. I expect these projects to cross-pollinate each other with ideas, and that there will be some consolidation in the future. Not all of them will be commercially sold; some will be used internally by their creators (for example, Microsoft's SONiC).
But I don't think it's worthwhile to discourage these efforts early on. Different ideas and different architectures are worth trying out in the wild, and settling on one standard too early may stifle innovation. There is another set of projects (such as SAI and switchdev) that offer alternative ways to abstract a network switch's hardware, either directly or through the Linux kernel, so that operating systems can easily use switches built on network silicon from Broadcom, Cavium, Mellanox, or Barefoot Networks. Both of these efforts will help bring customers more choice.
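To make the kernel-abstraction idea concrete, here is a minimal sketch of what it looks like in practice. With switchdev, a switch ASIC's front-panel ports are exposed as ordinary Linux network devices, so the same iproute2 tooling used on servers applies to the switch. The port name `swp1` and the address are hypothetical examples for illustration.

```shell
# On a switchdev-capable platform, front-panel ports are regular kernel
# netdevs, so standard commands apply ('swp1' is a hypothetical port name):
#   ip link set swp1 up
#   ip addr add 10.0.0.1/31 dev swp1
# The same tooling works unchanged against any Linux netdev; for example,
# inspecting the loopback interface on an ordinary server:
ip -o link show lo
```

This is the appeal of the kernel-based approach: operators can reuse existing Linux automation and monitoring tools on the switch instead of learning a vendor-specific CLI.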
The needs of these users vary, and the building blocks to meet them are starting to fall into place. Some users are focused on performance; companies such as Mellanox address that need with fast networking (100G Ethernet) and related technologies such as Remote Direct Memory Access (RDMA).
Some companies want to treat their networking switches just like servers, and a deployment model like Cumulus Networks' Cumulus Linux distribution fits the bill. Other networking vendors, like Pica8, focus on porting their PicOS network OS to multiple networking processors, giving end users who want a choice of platforms real freedom, since a multitude of ODMs manufacture compatible network switches.
Big Switch Networks has a white-box switch OS, but is focused on creating a network fabric with a controller, somewhat like Juniper's QFabric or Cisco's ACI, so it provides a white-box OS that supports this design.
Mainstream networking hardware makers, including Dell (ON series), HPE (Altoline), and Juniper (OCX), offer OCP switches for those who want brand-name devices.
On the server side, Penguin Computing and others have OCP-compatible systems that can be built to order. Component and device makers such as Sony and Panasonic were demonstrating archival systems to fulfill data protection needs.
It's apparent that the core technologies are starting to be ready. But when will enterprises have the skills to operate them?
Facebook, Google, Microsoft and other large operators have highly-skilled employees who have created their own processes and tools for operating very large data centers. Furthermore, their apps and data center design are significantly different from standard enterprises.
Even if their processes can be transferred to enterprise IT staff, those staff will find it challenging to adapt these methods to legacy workloads and smaller data centers that are fundamentally different from what one finds at places like Microsoft Azure.
We need to wait for an ecosystem of tools and processes to help enterprises adopt these new technologies. Large financial institutions do have the scale to start using these systems, and some of them were present at the conference.
But for other enterprises, where phrases such as "Layer 3 to the server" or "pod-based networks" are not everyday vocabulary, it will take some time to deploy these systems widely. Still, it wouldn't be right to dismiss them as "future science projects," as these technologies will be used in organizations of different sizes over time.
It's worth it for companies that can spare the staff to evaluate these systems. Consider trying them out in greenfield environments. It's a great way to learn and experiment with new DevOps-style deployment and management processes. When the technologies are ready for serious consideration, you will have staff who are knowledgeable. After all, it's the people and processes that take the longest to change.
With large IT companies supporting these systems, it's not totally a DIY endeavor either. The journey will take a while, so why not get an early start?
I recorded a video blog about the Open Compute Summit as well:
Woman: The following is an ESG On Location video.
Dan: Hello, and this is Dan Conde of ESG. Last week, I went to the Open Compute Summit, which is a conference organized by the Open Compute Project, originally started by Facebook, but now supported by a wide variety of IT vendors, including Hewlett-Packard, Dell, Microsoft, and so on.
The reason I went to the conference is to find out whether these technologies, originally designed to be used by hyper-scale data centers like Facebook and Google, can be used by regular enterprises in the future. Companies like Hewlett-Packard believe that there's a trickle-down effect where technologies at the very large scale would eventually find their way to large financial institutions, and eventually down to the regular enterprise.
I think it's important to realize that it's not just the technologies that matter, but the realization that operational processes and so on, used quite effectively by companies like Facebook and Google, can be adapted for use by regular enterprises.
We see that there's a large number of financial institutions at the conference who also believe that they can benefit from the same technologies. But they have a scale that's quite large compared to regular, smaller-size enterprises, so there is still a gap. Even so, I don't want to dismiss this technology as something that's only useful for the very largest companies; like many other technologies, it will eventually find its way down to regular-size enterprises. So it's worth keeping an eye on this technology.