When you think of security analytics and operations, one technology tends to come to mind – security information and event management (SIEM). SIEM technology was around when I started focusing on cybersecurity in 2002 (think eSecurity, Intellitactics, NetForensics, etc.) and remains the primary security operations platform today. Vendors in this space today include AlienVault (AT&T), IBM (QRadar), LogRhythm, McAfee, and Splunk.
SIEM has greatly improved over the last 16 years, but the underlying architecture remains similar. SIEM is built on a data management layer designed to collect and process raw security data. Once processed, the data becomes available to the upper layers of the stack for analysis and for actions such as automated/orchestrated processes.
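To make that layering concrete, here is a minimal sketch of the classic SIEM stack: a data management layer that collects and normalizes raw log lines, an analysis layer that applies detection logic, and an action layer that kicks off automated response. All function names, log formats, and the detection rule are illustrative assumptions, not any vendor's actual API.

```python
def collect(raw_logs):
    """Data management layer: parse raw log lines into structured events."""
    events = []
    for line in raw_logs:
        source, _, message = line.partition(": ")
        events.append({"source": source, "message": message})
    return events

def analyze(events):
    """Analysis layer: flag events matching a simple detection rule."""
    return [e for e in events if "failed login" in e["message"]]

def act(alerts):
    """Action layer: hand alerts to an automated/orchestrated response."""
    return [f"ticket opened for {a['source']}" for a in alerts]

raw = ["fw01: failed login from 10.0.0.5", "web01: GET /index.html"]
alerts = analyze(collect(raw))
print(act(alerts))  # one ticket, for fw01
```

The point of the sketch is the separation of concerns: the lower layer only worries about getting data in and into shape, while detection and response sit on top of it.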
If you think about it, this architecture is common to other types of management platforms in the past – network management, systems management, service management, etc. – all with roots back in client/server computing (or earlier).
Fast forward to 2018 and I see a fundamental problem with the historical SIEM architecture – a rapid increase in data volume.
The story goes something like this: SIEM evolved side-by-side with log management, with the primary data source being log files. SIEM still collects, processes, and analyzes logs, but enterprise organizations now want the same data management services for other security telemetry such as NetFlow, PCAP, threat intelligence, vulnerability data, etc. This has led to a dramatic increase in the amount of security data under management. According to ESG research, 28% of enterprise organizations collect and analyze significantly more data than they did 2 years ago, while 49% collect and analyze somewhat more data than they did 2 years ago.
I know of a few enterprise organizations that now collect, process, analyze, and store petabytes of security data. Furthermore, they tend to keep this data around for longer periods of time than they did in the past.
Given this exponential increase in security data volume, organizations have two choices moving forward:
- Create and manage a massive on-premises security analytics architecture. Security analytics became a big data application about 5 years ago, and now it’s become a big big data application. Managing security analytics on-premises now requires a massive distributed data management layer capable of collecting, processing, deduplicating, compressing, and encrypting terabytes to petabytes of data.
- Move security analytics to the cloud. This usually involves some data collectors on-premises that then move all security data to the cloud for processing and analytics.
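To give a feel for what the on-premises data management layer in option #1 actually has to do at scale, here is a hedged, stdlib-only sketch of two of the chores named above: deduplicating repeated events and compressing what remains before storage. Encryption is deliberately omitted, and the function names and event formats are illustrative assumptions, not any product's interface.

```python
import hashlib
import zlib

def dedupe(events):
    """Drop byte-for-byte duplicate events using a content hash."""
    seen, unique = set(), []
    for event in events:
        digest = hashlib.sha256(event).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(event)
    return unique

def compress(events):
    """Concatenate and compress events for cheaper long-term storage."""
    return zlib.compress(b"\n".join(events))

# Repetitive telemetry (a noisy alert repeated 1,000 times) collapses
# to two unique events, and the survivors compress well.
events = [b"alert: port scan from 10.0.0.9"] * 1000 + [b"info: heartbeat"]
unique = dedupe(events)
blob = compress(unique)
print(len(unique))  # 2
```

At terabyte-to-petabyte volumes, each of these steps becomes a distributed system of its own, which is exactly the operational overhead the cloud option offloads.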
Option #1 is increasingly complex to build, expensive to maintain, and requires lots of overhead for day-to-day operations. Alternatively, option #2 provides the ability to throw cloud-based resources (i.e., storage, processors, etc.) at data collection, processing, and analytics, thus eliminating the need for the expense and operational overhead associated with on-premises infrastructure.
Now I know that enterprise security professionals demand control and would rather own and operate the whole enchilada than move sensitive security data to a third-party cloud. Nevertheless, with a persistent global security skills shortage, CISOs must be more selective about what they do and what they don't do moving forward.
Do enterprises really want to build, maintain, and operate a complex and costly data management plane for security analytics and operations, or do they simply want to focus their efforts on the actual security analytics and operations?
To me, the answer to this question is obvious so it’s safe to conclude that security analytics infrastructure will migrate to the cloud over the next 12 to 36 months. Security professionals and technology vendors should prepare accordingly.