Without your data, you don’t have BC/DR, you have people looking for jobs.
But that does not mean that having your data remotely gives you a BC/DR plan. Having “survivable data” means you have the IT elements necessary either to roll up your sleeves and attempt to persevere, or (preferably) to invoke a pre-prepared set of BC/DR mitigation and/or resumption activities.
BC/DR is not a “feature,” a button, or a checkbox in a product, unless those elements invoke the orchestrated IT resumption processes that belong to a broader organizational, culture- and expertise-based approach to resuming business, not just restarting/rehosting IT.
July 2014 saw the fourth EMC “MegaLaunch,” featuring a broad swathe of announcements across EMC’s portfolio. While the range of news - and associated materials - to consume can seem daunting, this 8-minute “On Location” video blog (featuring ESG analysts Jason Buffington, Terri McClure, and Mark Peters) will give you the key headlines and commentary in a very efficient and easily digested manner.
Last week, in London, EMC made several announcements – many of which hinged on the VMAX3 platform – but the one of most interest to me was ProtectPoint, where those new VMAX machines will be able to send their backup data directly from production storage to protection storage (EMC Data Domain) without an intermediary backup server.
I mentioned this in my blog last week as an example of how, while “backup” is evolving, those evolutions require that the roles of both the Backup Administrator (which should not be thought of as a Data Protection Manager/DPM) and the Storage Administrator (or any other workload manager who is becoming able to protect their own data) evolve as well.
When asked “what is the future for data center data protection?” my most frequent answer is that DP becomes less about dedicated backup admins with dedicated backup infrastructure … and more about DP savvy being part of the production workload, co-managed by the DP and workload administrators.
To be clear, as workload owner enablement continues to evolve, the role of the “Data Protection Manager” (formerly known as the “backup administrator”) also evolves – but it does not and cannot go away. DPMs should be thrilled to be out of some of the mundane aspects of tactical data protection and even more elated that the technology innovations like snap-to-dedupe integration, application-integration, etc. create real partnerships between the workload owners and the data protection professionals. And it does need to be a partnership, because while the technical crossovers are nice, they must be coupled with shared responsibility.
I recently had the opportunity to attend the Dell Annual Analyst Conference (DAAC), where Michael Dell and the senior leadership team gave updates on their businesses and cast a very clear strategy around four core pillars: Transform (cloud) ... Connect (mobility) ... Inform (Big Data) ... and Protect.
Protect?! YAY!! As a 25-year backup dude who has been waiting to see how the vRanger and NetVault products would be aligned with AppAssure and the Ocarina-accelerated deduplication appliances, I was really jazzed to see “Protect” as a core pillar of the Dell story. But then the dialogue took an interesting turn.
Last week, I published a video summary of the data protection product news from EMC World 2014, with the help of some of my EMC Data Protection friends. To follow that up, I asked EMC's Rob Emsley to knit the pieces together around the Data Protection strategy from EMC.
During EMC World 2014 in Las Vegas last month, I had the chance to visit with several EMC product managers on what was announced from a product perspective, as well as overall data protection strategy.
When you boil IT down to its core, its purpose is to deliver the services and access to data that the business requires. That includes understanding the needs of the business and its dependencies on things like its data, and then ensuring the availability of that data.
"Availability" can be achieved in two ways: Resilience and Recoverability.
As usual, ESG had a strong analyst representation at this year’s EMC World, held last week in Las Vegas. Watch this 6-minute video blog to get a flavor of the event and to hear the key “takeaways” and initial high-level insights from a broad spectrum of ESG experts – on the storage ‘beat,’ there’s Terri McClure and myself, for data protection there is Jason Buffington, and you can also see and hear from Kevin Rhone (channels/partners) and Kerry Dolan (ESG Lab).
ESG surveyed 353 North American IT professionals representing midmarket (100 to 999 employees) and enterprise-class (1,000 employees or more) organizations to find out about their organizations’ current usage of technologies and processes for storing at least some of their information on a long-term basis (i.e., at least three years). All respondents were personally familiar with the processes and technologies their organizations used to store/retain electronic information—such as documents, database records, e-mail messages, etc.—for long-term retention and reference, and all respondents had purchasing influence for these products and services.
It seems that every time a new major IT platform is delivered, backing it up is an afterthought – often exacerbated by the fact that the platform vendor didn’t create the APIs or plumbing to enable a backup ecosystem. Each time, there is a gap where the legacy folks aren’t able to adapt quickly enough, and a new vendor (or small subset) starts from scratch to figure it out. And for a while, perhaps a long while, they are the de facto solution until the need becomes so great that the platform vendor creates the APIs, and then everyone feverishly tries to catch up. Sometimes they do; other times, not so much.
With big data finally becoming mainstream and adoption growing in global enterprises in all industries, the requirements for resilience and robustness of the applications have increased. Today, there is an underserved need for more mature approaches to data protection and disaster recovery. Vendors that address these issues as part of their offerings will see greater acceptance by their customers’ IT operations teams, but everyone must do more to improve their capabilities for better reliability and business continuity.
One of the primary deterrents to most BC/DR plans is that recurring testing must occur in order to ensure preparedness for when calamity strikes and to prove compliance for those with regulatory mandates. But testing in general can be not only arduous due to the complexity of bringing replacement systems online, but also risky in that doing so without proper preparation carries the possibility of affecting the primary systems, which are actively serving users. This has historically led to infrequent or even non-existent recovery testing. How—if at all—do cloud-based disaster recovery services change this dynamic?
There has never been so much corporate data outside of the data center as there is now, due to the changing usage of endpoint devices, particularly by users in bring-your-own-device (BYOD) environments. Too often, IT tries to apply complex legacy data center backup approaches to protect these modern endpoints, with the result that endpoints, and all of the corporate data residing on them, are left unprotected. But it doesn’t have to be that way.
Jason Buffington focuses primarily on data protection, along with Windows Server infrastructure, management, and virtualization. He has concentrated on data protection and availability technologies since 1989 and has been a Certified Business Continuity Planner (CBCP), a Microsoft Certified Systems Engineer and Trainer (MCSE/MCT), and a Microsoft MVP in file system and storage solutions.
© 2014 by The Enterprise Strategy Group, 20 Asylum Street, Milford, MA 01757 508.482.0188