Analyze This!

As cloud computing continues to mature and IaaS in many ways becomes the utility that Nick Carr and others envisioned, it is time to move up the stack, don't you think? Or at least to consider how PaaS and SaaS are developing as the next waves of cloud computing.

To that end, there is a plethora of things going on in the PaaS world these days when it comes to analytics. And we're not just talking about the Big Data hokum that seems to roll off everyone's tongue, but real, down-to-earth analytics.

Pivotal

There is so much good stuff going on that I'm not even sure where to start, so I'll start with the basic building blocks, or what may be a better name: frameworks. For PaaS to really take off, it is going to need thought leadership that understands which services will be required by applications serving the needs of big data (yup, I said it), mobile, and webscale workloads. And there is too much at stake not to create a framework that can serve the needs of the traditional business application as well. This is where I think Pivotal is well positioned: it can provide the services application developers need both to migrate traditional applications and to develop cloud-age applications on the same framework. That allows for improved fidelity across the business' application set, a reduced learning curve and less training, an easier path for EOL and migration from legacy to cloud-age apps, and, of course, an opportunity to reinvent the application portfolio with all kinds of new possibilities in analytics, prediction, harvesting large data sets, and more. In time, technology like Pivotal's services will become the fabric of some fairly large cloud (and likely hybrid) applications, and with Pivotal's analytics expertise and products will expose many new opportunities for businesses to become more efficient and to grow.

Tier3

Tier3 is both a service provider and the maker of a cloud service management software system. What makes them special in my mind is that they also built a Cloud Foundry-based PaaS layer using the open source Iron Foundry and extended it to include .NET and Hyper-V, which lets their PaaS platform approach seamless integration across disparate OSs, dev platforms, and hypervisors. Since companies rarely have only one dev platform, and let's face it, .NET is in a lot of data centers, Tier3 is uniquely positioned to become the fabric for any company that doesn't want to be siloed into all Linux or all Microsoft.

Sumo Logic

Moving on to specific analytics engines, and there are many, there are a few that I tend to follow and find interesting because they put real effort into solving cloud and data center problems. Sumo Logic is built on a cloud-based platform that collects machine and application data and, through a function they call LogReduce, filters the large, noisy data set down to a reasonable subset using pattern-based analytics, exposing a manageable set of anomalies for an operations person to focus on. With both structured and unstructured data fed into the system, the operations staff only have to ask the questions that come to mind once they are shown the patterns. These folks already apply their methods to machine data and application logs to surface service issues as well as security events.
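
To make the LogReduce idea concrete, here is a minimal Python sketch of pattern-based log reduction, assuming a simple model where variable fields (IPs, hex IDs, numbers) are collapsed into placeholders so that near-identical lines fold into one pattern and the rare patterns surface as anomalies. The regex and threshold are my own illustrative choices, not Sumo Logic's actual algorithm.

```python
import re
from collections import Counter

# Collapse variable fields (IPs, hex IDs, numbers) into placeholders so log
# lines that differ only in those fields fold into a single pattern.
VARIABLE = re.compile(r"\b(?:\d{1,3}(?:\.\d{1,3}){3}|0x[0-9a-fA-F]+|\d+)\b")

def signature(line: str) -> str:
    return VARIABLE.sub("*", line.strip())

def reduce_logs(lines, rare_threshold=3):
    """Group lines by pattern; rare patterns become candidate anomalies."""
    patterns = Counter(signature(line) for line in lines)
    common = {p: n for p, n in patterns.items() if n >= rare_threshold}
    rare = {p: n for p, n in patterns.items() if n < rare_threshold}
    return common, rare

logs = [
    "Accepted connection from 10.0.0.5 port 51234",
    "Accepted connection from 10.0.0.9 port 50111",
    "Accepted connection from 10.0.0.7 port 49822",
    "Disk error on /dev/sdb sector 88412",   # appears once -> anomaly
]
common, rare = reduce_logs(logs)
print(f"{len(logs)} lines -> {len(common) + len(rare)} patterns")
for pattern in rare:
    print("investigate:", pattern)
```

Even on this toy input, four lines reduce to two patterns, and the one-off disk error is the pattern the operator gets pointed at.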

VMware

VMware continues to find new ways to strategically leverage their vCenter Log Insight product. It is another analytics engine that can ingest machine log data, application logs, network traces, config files, etc. Once Log Insight has ingested the data, operations personnel can search it, run analytics against it, and generate reports across the whole stack, from the application down. Now that EMC has written the EMC Storage Analytics (ESA) plug-in, customers can drill down or drill up, end to end, into EMC storage as well. These are very powerful tools that systems folks haven't had access to in the past, and I believe they will greatly improve time to repair, predictive problem avoidance, and more.
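
The payoff of having the whole stack in one engine is correlation. As a rough illustration, here is a Python sketch that lines up an application error with storage-layer events from the same time window once both streams land in one place; the event data and the 30-second window are invented for the example, and this is not Log Insight's actual query API.

```python
from datetime import datetime, timedelta

# Toy event streams standing in for ingested application and storage logs;
# the timestamps and messages are illustrative, not real product output.
app_events = [
    (datetime(2013, 8, 1, 10, 4, 12), "app", "ERROR order-service timeout"),
]
storage_events = [
    (datetime(2013, 8, 1, 10, 4, 9), "storage", "latency spike on LUN 7"),
    (datetime(2013, 8, 1, 9, 0, 0), "storage", "scheduled scrub complete"),
]

def correlate(errors, other, window=timedelta(seconds=30)):
    """For each error, find events elsewhere in the stack within the window."""
    for ts, _, msg in errors:
        nearby = [e for e in other if abs(e[0] - ts) <= window]
        yield msg, nearby

for err, nearby in correlate(app_events, storage_events):
    print(err)
    for ts, layer, msg in nearby:
        print(f"  {ts:%H:%M:%S} [{layer}] {msg}")
```

The application timeout immediately points at the storage latency spike three seconds earlier, which is exactly the kind of top-to-bottom drill-down the ESA plug-in extends into the storage layer.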

CloudPhysics

Now for a completely different approach, there is a new kid on the block called CloudPhysics. What makes them different is that they are SaaS-based: they collect your data and all their other customers' data (fully protected and anonymized) and apply analytics with a collective intelligence approach. Having worn the shoes of a data center manager, and having faced the outages, performance issues, and unplanned workloads, I know what it's like to feel helpless and often even blind to where the problems are. CloudPhysics lets the operations team track changes, from changes that need to be made in the future to changes that occurred recently. For future changes, CloudPhysics analyzes the current change bulletins against the installed configurations. For the past, changes are captured and can be reported on. What makes it interesting is correlating performance against changes, plus their unique ability to perform what-if analysis on specific configurations. For example: if I tweak a VM reservation on one node of an HA configuration and that node then fails, what does that do to the other nodes in the config? Can they handle the new VM reservations?
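
Here is a minimal Python sketch of that kind of failover what-if, assuming a toy cluster model with per-node memory capacity and VM reservations; it simulates one node failing and checks whether the survivors can absorb the orphaned reservations. The numbers, node names, and greedy placement are my own simplifying assumptions, not CloudPhysics' actual engine.

```python
# Hypothetical cluster: per-node memory capacity (GB) and the reservations
# of the VMs currently placed on each node.
cluster = {
    "node-a": {"capacity": 256, "vms": [32, 48, 16]},
    "node-b": {"capacity": 256, "vms": [64, 32]},
    "node-c": {"capacity": 256, "vms": [48, 48, 48]},
}

def what_if_node_fails(cluster, failed):
    """Can the surviving nodes absorb the failed node's VM reservations?"""
    orphans = sorted(cluster[failed]["vms"], reverse=True)  # biggest first
    free = {
        name: node["capacity"] - sum(node["vms"])
        for name, node in cluster.items() if name != failed
    }
    for vm in orphans:
        # Greedy placement onto the surviving node with the most headroom.
        target = max(free, key=free.get)
        if free[target] < vm:
            return False, f"{vm} GB reservation cannot be placed"
        free[target] -= vm
    return True, free

ok, detail = what_if_node_fails(cluster, "node-a")
print("failover OK, headroom left:" if ok else "at risk:", detail)
```

Rerun the check after "tweaking" a reservation in the model and you see immediately whether the HA config still holds, before touching production.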

Also, since the data is aggregated across all the customers, the operations team can use apps, which CloudPhysics calls 'cards', to look at how other configurations are performing. For example, if another customer has a config that includes SSD with the same blade and VM infrastructure, we can run a card to look at what impact SSD had on the application's performance, or even run a what-if with SSD to decide whether we need it for the whole data center or only for specific nodes. The impact of this not only saves time but also the money not spent where the technology doesn't deliver ROI.
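
Under the hood, a card like that SSD comparison amounts to slicing the anonymized peer data by one configuration attribute and comparing a performance metric across the slices. The Python sketch below does just that with invented numbers; field names like p95_latency_ms are illustrative assumptions, not CloudPhysics' schema.

```python
from statistics import median

# Anonymized peer configs (hypothetical data): same blade and VM profile,
# differing only in whether the datastore sits on SSD.
peers = [
    {"ssd": True,  "p95_latency_ms": 4.2},
    {"ssd": True,  "p95_latency_ms": 5.1},
    {"ssd": False, "p95_latency_ms": 18.7},
    {"ssd": False, "p95_latency_ms": 22.3},
    {"ssd": False, "p95_latency_ms": 16.9},
]

def ssd_impact(peers):
    """Median p95 latency with and without SSD across comparable peers."""
    with_ssd = median(p["p95_latency_ms"] for p in peers if p["ssd"])
    without = median(p["p95_latency_ms"] for p in peers if not p["ssd"])
    return with_ssd, without

with_ssd, without = ssd_impact(peers)
print(f"median p95 latency: {with_ssd} ms with SSD vs {without} ms without")
```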

CloudPhysics isn't sitting still. They have a quickly growing community of customers who are creating cards for all kinds of new ways of looking at data center problems. Watching a demo of the card functionality, I was reminded of how easy Tableau is to use for creating queries and reports: really just drag and drop, and the data set is created in seconds. Anyone can sign up and try CloudPhysics out without spending a penny.

Want to predict your next outage or performance problem? How about keeping ahead of patches and security updates? Thinking about buying SSD? Want to consolidate systems that are virtualized in silos into resource pools? After seeing CloudPhysics, these become questions you can actually answer, where before the answers were a good guess at best. And if you still like guessing, use the what-if capabilities and guess away until you find a set of answers that makes sense for running your business.

Summary

These are just a few of the new and interesting tools available to operations teams today. With roles in the data center shifting to a centralized stack management approach, the tools have to become centralized as well and provide a holistic view into all the layers of the stack running in the data center. The good news is that this new generation of tools can be specialized for virtualized and cloud systems, act as frameworks that work heterogeneously across different OSs and hypervisors, or take a data-centric view and provide a collective intelligence capability that changes the way we think about solving problems. I think it is time to not only analyze this, but to analyze the data center, so IT can proactively make the business's information assets more efficient and the business more competitive.
