The Networking @ Scale conference is different in that it is not run by a particular vendor (it was sponsored by Facebook but served the broad networking community), and it aims to share networking solutions that can scale to serve apps and services with millions, perhaps billions, of users around the world.
First off, we all know that IPv4 is out of date, and the conference venue had Wi-Fi that was IPv6 only. Most people could connect to it, but many had problems with their VPN. It was a subtle hint to ask the vendors or IT groups to “get with the program”.
The conference attendees represented a varied set of roles: operators (Alibaba, Dropbox, Netflix) and vendors (Cisco, Cumulus, Huawei, Juniper, Riverbed, etc.), competitors (Google and Facebook) and partners, US and international participants, all with a common goal: to figure out how the largest operators are able to scale their infrastructures. Some were traditional networking vendors who wanted to see how these techniques would trickle down to traditional enterprises. Operators of rapidly growing services wanted to see how the lessons of hyperscale operators apply to their own futures.
One topic that is top of mind is understanding at what scale enterprises (let’s say large financial institutions) can benefit. Or perhaps the new scalable systems have become so easy to use that they can reach even further down the scale.
It’s hard for me to generalize. The classic answer is that “it depends.” A person who works at a major WAN optimization and SD-WAN vendor said that enterprises seek an incremental approach. Many of these new techniques are interesting, but enterprises can’t swap things out overnight, and greenfield deployments are less common than brownfield ones. Fair enough. I agree with that.
A chief scientist at a disaggregated networking OS vendor and I discussed what was missing. I mentioned that a common model for describing the intent of networks was missing, and he said that such models have always been an unrealized dream and that we don’t have similar models for servers. Instead, he said, people write Puppet, Ansible, or Python scripts to automate configuration across a fleet of systems. I sort of agree. What he said is the reality of the world today, but I have aspirations for something better.
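To make the script-driven reality he described concrete, here is a minimal sketch of the kind of Python automation teams write today: a shared template rendered per device across a fleet. The device names, template fields, and `render_config` helper are all hypothetical, for illustration only; a real tool (Ansible, Puppet, or a homegrown script) would push each rendered config to the device over SSH or NETCONF.

```python
# Minimal sketch of fleet-wide config automation.
# All names here (device list, template fields) are hypothetical.

CONFIG_TEMPLATE = """hostname {hostname}
interface {uplink}
  description uplink to {peer}
  mtu 9000
"""

# An inventory of devices, each with its own parameters.
FLEET = [
    {"hostname": "leaf-01", "uplink": "eth0", "peer": "spine-01"},
    {"hostname": "leaf-02", "uplink": "eth0", "peer": "spine-02"},
]

def render_config(device):
    """Fill the shared template with one device's parameters."""
    return CONFIG_TEMPLATE.format(**device)

def render_fleet(fleet):
    """Render a config for every device in the inventory.

    A real tool would then push each result to the device;
    here we just return the rendered text keyed by hostname.
    """
    return {d["hostname"]: render_config(d) for d in fleet}

if __name__ == "__main__":
    for name, cfg in render_fleet(FLEET).items():
        print(f"--- {name} ---\n{cfg}")
```

The point of the sketch is the gap it leaves visible: the template encodes *how* to configure each box, not *what* the network as a whole is supposed to do, which is exactly the intent model I find missing.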
A lot of these viewpoints are colored by where you are coming from. If you are a traditional vendor, you see the realities of enterprises making slow changes, which sets the boundaries of your world. If you come from a disaggregated network Linux vendor, then your goal is to bring the lessons of hyperscale operators to the enterprise, so the operations of large operators seem natural and are worth teaching to the enterprise.
I see the truth as somewhere in the middle. What I enjoyed the most were the practical stories told by the speakers. Dropbox was moving storage away from S3 to its own “Magic Pocket” storage system (they remain key partners of AWS, though), and they described how they designed the systems to perform the switchover live. These practical stories are the best, since they aren’t set in an abstract world but reflect actual work that is being done.
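Dropbox’s talk covered their own specifics; as a generic illustration of one common live-switchover pattern (dual writes with a read cutover), not their actual design, the idea can be sketched like this: write to both stores while the new one is backfilled and verified, then flip reads over without downtime.

```python
# Generic dual-write / read-cutover sketch for a live storage
# migration. This is NOT Dropbox's Magic Pocket design, just a
# common pattern for switching storage backends without downtime.

class MigratingStore:
    def __init__(self, old_store, new_store):
        self.old = old_store        # current source of truth
        self.new = new_store        # backend being migrated to
        self.read_from_new = False  # flipped at cutover time

    def put(self, key, value):
        # Dual-write: keep both backends in sync during migration,
        # so either one can serve reads once it is verified.
        self.old[key] = value
        self.new[key] = value

    def get(self, key):
        # Reads follow the cutover flag; until it flips, the old
        # store remains the source of truth.
        store = self.new if self.read_from_new else self.old
        return store[key]

    def cutover(self):
        # Flip reads to the new backend once backfill and
        # verification are complete.
        self.read_from_new = True


# Usage: dicts stand in for the two storage backends.
store = MigratingStore(old_store={}, new_store={})
store.put("block-1", b"payload")
print(store.get("block-1"))  # served from the old store
store.cutover()
print(store.get("block-1"))  # now served from the new store
```

The hard parts a real migration adds on top of this sketch, and which the talk made vivid, are backfilling existing data, verifying the copies match, and being able to flip the flag back if something goes wrong.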
These conferences test the limits of what is possible, so even though the lessons don’t apply to 90% of enterprises today, they do offer a glimpse of what may be possible in the future, in levels of automation and resiliency at scale.