At RSA Conference 2016, it was apparent to me that networking plays a significant role in modern infrastructure security, providing visibility, context, and enforcement for modern application workloads.
If the network is the nervous system of a data center, then better network visibility gives us better insight into the data center's health and operation, much as electroencephalography (EEG) does with brain signals in medicine.
Of course, we know that networks are critical for traditional uses: client/server communications, server/storage data transfer, and long-distance communications for branch or internet access. In these traditional uses, the computational workloads or storage tended to reside on one side of the connection, and the network was used to access the results. In more modern workloads, the computation and data are distributed. Consider microservices that split a program into services spanning many servers, in some cases combining services in the public cloud with those in a data center.
The network starts to take on a different role, acting as the glue for programs or workloads. It starts to resemble the role dynamic memory served for sharing data within a single computer. The memory in traditional programs serves as a buffer to transfer input (or parameters in a stack frame) between devices, procedures, or processes. We had programming techniques for network access, such as sockets or remote procedure calls, but programs were still structured as a central workload. Now, as programs get decomposed, the network increasingly serves to tie the elements together with a common foundation. Sun Microsystems' ads once said that the network is the computer, and this is becoming ever more true.
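To make the analogy concrete: where procedures within one program once exchanged data through shared memory, decomposed services exchange it over sockets. The following is a minimal, self-contained sketch (not any vendor's code) in which two "services" in a single process talk over a local TCP socket:

```python
import socket
import threading

def echo_service(server_sock):
    """One 'service': accept a connection, read a request, reply."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)

def call_service(port, payload):
    """The other 'service': a client making a request over the network."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(payload)
        return s.recv(1024)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # ephemeral port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_service, args=(server,))
t.start()
reply = call_service(port, b"hello")
t.join()
server.close()
print(reply.decode())  # echo:hello
```

Every such exchange crosses the network, which is precisely why network visibility now reveals so much about program behavior.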
By examining and controlling the network, we can place better controls on program behavior and gain visibility into programs' actions. Of course, we still need visibility within a computer, but we also need a better understanding of behavior over the network.
We have had network analysis tools for a long time, with a variety of network packet capture, network packet broker, and related analysis tools. These need to evolve to better understand the traffic within a data center (or between workloads), and to apply analysis and correlation to understand the behavior of the data center holistically, throughout the application stack.
A wide variety of companies provide technologies that deliver the lower-level support, augmented with higher-level functions for analysis. Network testing companies like Ixia now offer solutions for visibility. Software- and hardware-based approaches from firms like APCON, Big Switch Networks (Big Monitoring Fabric), cPacket, Gigamon, Netscout, and Pluribus give insight. Higher-level functions provided by products from traditional networking vendors, such as Cisco's Lancope or Juniper's Sky Advanced Threat Prevention, help complete the view. Open source projects such as OpenStack's Tap as a Service (TaaS) extension to the Neutron networking project, with contributions from programmers at companies like Ericsson, provide a community-based alternative. This is just a smattering of the solutions available.
With such a variety of choices, standards are going to be important. We have flow records in formats such as sFlow and IPFIX (which is based on Cisco's NetFlow), and they have been successful. Now we are looking at higher-level metadata derived from these low-level foundations, so that a variety of solutions can extract more meaning from the raw data.
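As a rough illustration of deriving metadata from raw flow data, the sketch below uses hypothetical, heavily simplified flow records modeled on fields that NetFlow/IPFIX and sFlow exports share (the 5-tuple plus byte counts); real exporters use binary templates, but the roll-up idea is the same:

```python
from collections import defaultdict

# Hypothetical simplified flow records (illustrative values, not a real
# IPFIX/sFlow wire format): source, destination, destination port,
# protocol, and byte count.
flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "dport": 443,  "proto": "tcp", "bytes": 48_200},
    {"src": "10.0.0.7", "dst": "10.0.1.9", "dport": 443,  "proto": "tcp", "bytes": 12_800},
    {"src": "10.0.0.5", "dst": "10.0.2.4", "dport": 5432, "proto": "tcp", "bytes": 96_000},
]

def bytes_per_service(records):
    """Roll raw flow records up into per-destination-service byte totals,
    a simple example of higher-level metadata derived from low-level flows."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["dst"], r["dport"])] += r["bytes"]
    return dict(totals)

totals = bytes_per_service(flows)
print(totals)
```

From aggregates like these, a visibility tool can answer higher-level questions, such as which services are the busiest or whether a workload suddenly starts talking to an unfamiliar destination.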
So call it what you want: the network is the new RAM, or the network is the new program glue. Either way, it will provide the visibility needed for security, telemetry, and insights for troubleshooting and analyzing programs.
Following RSA last week, I recorded a video of my thoughts on the conversations at the event:
Announcer: The following is an ESG On Location Video.
Dan: Hello, and this is Dan Conde from ESG. Last week, I went to the RSA Conference in San Francisco and have some observations about the increasing importance of networking. In the past, we had networking as a way to access programs, as a way to access storage, and obviously access the web.
But increasingly with the rise of microservices and having programs that combine services that are in a cloud along with those in a data center, it's apparent that the network is serving a purpose not too different than what memory did in the past when you had programs running on a single server. As you get better visibility into the network, you can understand how the programs are behaving, whether they're failing, or whether they're undergoing a security breach.
There are many companies, starting from traditional companies like Cisco with its acquisition of Lancope, or network testing companies like Ixia, and network packet broker companies, and other network analysis companies like Gigamon, and others that are all approaching network visibility from a different angle, whether they're creating whole ecosystems, creating a platform, or creating tools that integrate with modern techniques like machine learning or big data analysis to give you better insights into the network.
So this is not a new area. It's an evolution of an area that's taking a traditional approach to new problems that are occurring in a much more networked data center and in modernized applications. And I look forward to writing more about this on the ESG blogs in the months and years to come.