ESG's Christophe Bertrand reviews the firm's new Backup Data Transformation Model.
Read the related ESG Blog: From Backup to Intelligent Data - Introducing a New Maturity Model
Hello, my name is Christophe Bertrand and I am the senior analyst at ESG focused on data protection. Today, I would like to share a very important new maturity model: the Backup Data Transformation Model. The data protection market, which broadly includes backup and recovery, disaster recovery, and replication, among others, is changing and is on the cusp of a major evolution.
Let's review the four stages that we have identified in our model. This will help both end users and vendors evaluate their current position on the journey from data backup to autonomous data intelligence, and whether they can cross the data management chasm.
The first two stages, baseline and cloud-enabled, are separated from the last two, data intelligence and autonomous, by what we have coined the data management chasm. On the left-hand side of the model, the focus is placed on IT and on supporting IT production, with data and application backup and recovery as the core objectives. IT leaders will tend to zoom in on where the backup data lives, how much it costs, and how to leverage newer approaches to support data center modernization and IT infrastructure transformation.
Let's review the baseline stage in more detail. It corresponds to the protection of on-premises workloads. It is the traditional backup and recovery space, one in which software, appliances (physical or virtual), and services support the desired KPIs: the recovery point objectives (RPOs) and recovery time objectives (RTOs) of the organizations that leverage these technologies.
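To make these KPIs concrete, here is a minimal sketch in Python, not tied to any particular product, of how one might check a backup schedule against an RPO; the function names and timestamps are illustrative assumptions.

```python
from datetime import datetime, timedelta

def rpo_exposure(last_backup: datetime, now: datetime) -> timedelta:
    """Worst-case data loss if a failure occurred right now."""
    return now - last_backup

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the time since the last good backup is within the RPO."""
    return rpo_exposure(last_backup, now) <= rpo

# Example: a 4-hour RPO with the last backup taken 5 hours ago
# is out of compliance.
now = datetime(2020, 1, 1, 12, 0)
last = datetime(2020, 1, 1, 7, 0)
print(meets_rpo(last, now, timedelta(hours=4)))  # False
```

The RTO is measured the same way, but against the time it takes to bring the application back online rather than the age of the data.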
In this stage, we see a complex, go-to-market-focused ecosystem of alliances and technical integrations that has taken many years to evolve. The market is changing rapidly with the adoption of cloud-based technologies to extend traditional on-premises environments. In the context of backup and recovery, this means solutions and technologies that, in time, will seamlessly and natively leverage public and private cloud infrastructures, protecting workloads to the cloud, in the cloud, across clouds, and, of course, on-premises too.
All of this requires significant orchestration to coordinate data movements, application and virtual machine failover or restarts, disaster recovery runbooks, and many other aspects of disaster recovery. This stage is in constant evolution and still offers many opportunities for vendors to improve their capabilities and help end users achieve coherent RPOs and RTOs in what has become a hybrid infrastructure.
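To illustrate what that orchestration implies, a disaster recovery runbook can be thought of as an ordered sequence of dependent steps. The sketch below is purely hypothetical; the step names and actions are assumptions standing in for real storage, hypervisor, and DNS operations.

```python
from typing import Callable

# Hypothetical DR runbook steps; these print statements stand in for
# real storage, hypervisor, and network operations.
def fail_over_storage() -> None:
    print("Promoting replicated volumes at the DR site...")

def restart_vms() -> None:
    print("Restarting virtual machines in dependency order...")

def redirect_traffic() -> None:
    print("Updating DNS to point at the DR site...")

RUNBOOK: list[tuple[str, Callable[[], None]]] = [
    ("fail over storage", fail_over_storage),
    ("restart VMs", restart_vms),
    ("redirect traffic", redirect_traffic),
]

def execute_runbook() -> None:
    """Run each step in order; ordering matters because VMs cannot
    restart before their storage is available."""
    for name, step in RUNBOOK:
        print(f"Step: {name}")
        step()

execute_runbook()
```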
I predict that these two stages will merge in the next few years. In other words, technology and solutions will evolve to make on-premises and cloud-enabled data protection more seamless. There is still plenty of work ahead, in particular for the protection of SaaS-based applications, for example. As you can see, the stages to the left of the chasm are still evolving and still pose many challenges for IT leaders seeking to deliver coherent and predictable service levels.
The infrastructure tends to evolve as a reaction to changes in other parts of the environment, such as the adoption of new platforms that now need to be backed up. Hybrid data protection, even when fully and natively cloud-enabled, is still, how shall we say, a bit dumb. What I mean here is that while there is some granular level of understanding of what data is backed up, where it went, how old it is, etc., backup data is not really portable across solutions.
It's not easily reusable, and it offers very little insight into the data itself. This is where a fundamental change is happening. The requirement for context and content about the data is becoming more acute as new regulations and the need to use data to support digital transformation, for example, change the role of data in the enterprise.
Data has to be more intelligent. It's really about business outcomes and the notion of data as a true asset that can be leveraged to create a return on investment or to avoid costs and risks. Many vendors talk about data management, but no one has truly defined what this means. It really should be called intelligent data management, meaning that beyond backup and recovery use cases, the solutions or systems performing these operations can also provide insight into the data, understand its context and content, and deliver management capabilities.
One simple example is the classification of data: knowing what data you have, where it lives, and performing masking operations on it, for example. In this new stage, we expect to see new players with specialized solutions and new approaches to solving customer problems, thinking more in terms of data processes rather than data movement and storage.
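As a simple illustration of classification and masking, the sketch below flags field values that look like a US Social Security number and masks them; the pattern, record, and field names are assumptions for illustration only.

```python
import re

# Hypothetical classifier: flag values that look like US Social
# Security numbers. The pattern and sample record are illustrative.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(value: str) -> str:
    """Label a field as sensitive or public based on its content."""
    return "sensitive" if SSN_PATTERN.search(value) else "public"

def mask(value: str) -> str:
    """Replace anything SSN-shaped with a masked token."""
    return SSN_PATTERN.sub("***-**-****", value)

record = {"name": "Jane Doe", "note": "SSN on file: 123-45-6789"}
for field, value in record.items():
    if classify(value) == "sensitive":
        record[field] = mask(value)

print(record)  # {'name': 'Jane Doe', 'note': 'SSN on file: ***-**-****'}
```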
We expect the modalities of delivery to be workload-specific initially, and probably vertically focused. We also expect to see artificial intelligence and machine learning get integrated and enrich the quality of the management processes. This takes us to our fourth and last stage, one that I call autonomous intelligent data management.
As with intelligent data management, the solutions are designed to be preventative rather than reactive. The acceleration of AI and ML allows processes to be highly automated, minimizing human intervention. This, in turn, offers significant opportunities for higher efficiencies, better service levels, and overall improvements in the quality of the management of the data itself.
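One way to picture "preventative rather than reactive," with a simple statistical baseline standing in for the AI/ML this stage implies: a hypothetical monitor that flags a backup job whose size deviates sharply from its recent history, an early signal worth investigating before a restore is ever needed. The thresholds and sample figures are assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag the latest backup size if it deviates from recent history
    by more than z_threshold standard deviations."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Nightly backup sizes in GB; a sudden jump can signal ransomware
# (deduplication ratios collapse) or a misconfiguration.
sizes = [102.0, 99.5, 101.2, 100.8, 98.9]
print(is_anomalous(sizes, 250.0))  # True: investigate proactively
```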
This model can be used by end users and vendors alike to establish their levels of maturity in each of the stages and their strategic objectives on this journey to autonomous and intelligent data management. For more information on this topic, please contact us at ESG. Thank you very much for your time.