The concept of SDN has evolved over time. At the beginning, it was all about OpenFlow, and many switch vendors added OpenFlow support to their devices. Later on, network virtualization overlays became popular, characterized by products such as Cisco ACI, Juniper Contrail (and OpenContrail), Nuage Networks VSP, and VMware NSX, and by open source projects such as OVN.
At the same time, the definition of SDN continued to fragment, as simple ideas of network automation started to blur into what SDN means. Some people believed that treating infrastructure as code is a form of SDN, but that view was the exception rather than the rule.
I attended a meetup hosted by Tigera and Red Hat on the topic of “OpenShift overview and Calico Networking/Policy” and got reacquainted with Calico networking, a popular solution that plugs into Kubernetes via the Container Network Interface (CNI). It’s widely used to stitch together networking for those deploying containers orchestrated by Kubernetes (K8s).
Calico’s approach to networking is not to use traditional network virtualization overlays (although it can coexist with them), but to use regular Layer-3 networking and well-known protocols like BGP to scale to a large number of containers. After all, BGP scales to the entire Internet, so why not to your container pods? Plain networking also avoids some of the overhead of encapsulation: all it needs to do is route packets, as IP networks have always done.
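To make the encapsulation overhead concrete, here is a rough back-of-the-envelope sketch of the per-packet header cost of common overlays over IPv4, compared with routing the pod’s packet natively. Header sizes are the base figures from the respective specs (VXLAN: RFC 7348, GENEVE: RFC 8926); real deployments can differ with IPv6 outer headers or GENEVE options.

```python
# Rough per-packet header overhead of common overlay encapsulations over IPv4,
# relative to routing the pod's IP packet natively (a plain Layer-3 model).
OUTER_IP4 = 20  # outer IPv4 header, no options
OUTER_UDP = 8   # outer UDP header

OVERHEAD_BYTES = {
    "plain-l3": 0,                              # no encapsulation at all
    "ip-in-ip": OUTER_IP4,                      # just a second IPv4 header
    "gre":      OUTER_IP4 + 4,                  # 4-byte base GRE header, inner payload is IP
    "vxlan":    OUTER_IP4 + OUTER_UDP + 8 + 14, # 8-byte VXLAN header + 14-byte inner Ethernet header
    "geneve":   OUTER_IP4 + OUTER_UDP + 8 + 14, # 8-byte base GENEVE header, no options
}

def inner_mtu(link_mtu: int, overlay: str) -> int:
    """MTU left for the inner packet on a link with the given MTU."""
    return link_mtu - OVERHEAD_BYTES[overlay]

for name, overhead in OVERHEAD_BYTES.items():
    print(f"{name:>8}: {overhead:2d} bytes of overhead, "
          f"inner MTU {inner_mtu(1500, name)} on a standard 1500-byte link")
```

This is why VXLAN interfaces are commonly configured with an MTU of 1450 on a standard 1500-byte Ethernet link, while plain routing keeps the full 1500 bytes for the workload.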
This led me to reconsider some of the earlier religious arguments. Should SDN use a controller or not? Should intelligence live at the edge or in a central controller? Should features be supported in silicon or entirely in software? Should overlays use VXLAN, GRE, or GENEVE? I too was caught up in this mix and got into arguments over the merits of one approach or another.
After seeing some of these “non-virtualization” approaches to SDN, my thinking has evolved. I still believe that network virtualization (overlays are not a new idea; they go back to VLANs and MPLS) is a critical method for separating logical and physical networks, and a way to address traditional Layer-2 networks.
Do these modern container networks that do not use virtualization reflect a regression of sorts? No. It’s a way to tailor a network to the particular performance, scale, and security requirements of a set of workloads (and the related infrastructure).
The different systems can coexist and are not necessarily mutually exclusive. Plain old networking tools such as iptables are mature, well understood, and a great way to stitch together endpoints. What really matters is the conceptual model of networking that IT organizations use to configure their infrastructure. That may be done via scripts such as Chef cookbooks, or directly through a UI. The details of how it is done are not especially relevant; as long as you create proper models, things will sort themselves out. Using ACI Endpoint Groups may be fine, but ultimately they serve to meet the needs of IT policy.
The right model enables the deployment of workloads and their interconnections, and that in turn supports the business goals and requirements. A business rule does not care about encapsulation methods or controller configuration. Whether traffic flows properly to meet a regulatory requirement is what is critical, and that’s what we ought to think about.
I hope to write more about networking models in the future.