Most Recent Blogs

Dell Continues to Enhance its DR Deduplication and DL Backup Appliances

Posted: May 22, 2015   /   By: Jason Buffington   /   Tags: Data Protection, Dell, deduplication, backup appliances

Earlier this month, Dell announced enhancements to its DR series of deduplication appliances. Deduplication appliances continue to be a common way to improve one's overall data protection infrastructure, since they can typically be added to whatever backup/archive software or method you already have while almost immediately reducing the storage consumed by secondary copies.

Read More

Video Series on Data Protection Appliances – Part 2, Deduplication Appliances

Posted: May 01, 2015   /   By: Jason Buffington   /   Tags: Data Protection, PBBA, DPA, deduplication, backup-to-disk, data protection appliances

Continuing our four-Friday video series based on the recent ESG research report on the Shift toward Data Protection Appliances, this installment covers deduplication storage appliances: those wonderful storage devices that drop into existing (or new) backup and archival infrastructures and automagically, radically reduce storage consumption through optimized protection storage.
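
The core trick behind these appliances can be sketched in a few lines: split incoming backup data into chunks, fingerprint each chunk, and store each unique chunk only once. This is a minimal, generic illustration of content-hash deduplication (fixed-size chunking; real appliances typically use variable-size chunking and far more sophisticated indexing), with all names hypothetical:

```python
import hashlib

def dedupe_store(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Split data into fixed-size chunks, keep one copy of each unique
    chunk, and return the list of fingerprints (the 'recipe' needed to
    reassemble the original stream)."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # store each unique chunk only once
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original stream from its recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # highly redundant data
recipe = dedupe_store(backup, store)
assert restore(recipe, store) == backup
stored = sum(len(c) for c in store.values())
print(f"logical: {len(backup)} bytes, stored: {stored} bytes")  # 16384 vs 8192
```

Because every backup of mostly-unchanged data reuses chunks already in the store, second and subsequent backups consume very little new capacity, which is where the dramatic reduction ratios come from.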

Read More

Permabit SANblox: Fitting More Bricks in Storage Buildings

Posted: September 18, 2014   /   By: Mark Peters   /   Tags: Storage, IT Infrastructure, deduplication, compression

Permabit just made a move that has the potential to be very interesting. It has taken its Albireo deduplication and compression capabilities and packaged them into an appliance so as to, essentially, retrofit data reduction onto installed FC SANs. Of course, data reduction is nothing new per se; various efforts have been made by the "mainstream" vendors for their "mainline" products over the last few years, but without huge success... although maybe that's because such efforts have not been wholeheartedly embraced at the sales tip of the spear, given that they almost certainly lead to lower capacity sales. However, as the saying goes, the times they are a-changin': data reduction is cool, embraced and promoted by the newer vendors, and, with the cat thus firmly out of the bag, there is an undercurrent of pressure for it to be more widely available. There are few if any realistic reasons not to use it... well, except, ahem, for the fact that it ain't available on most of the common products in use (and indeed still being sold) today.

Thus the new SANblox move by Permabit is intriguing - not to mention potentially lucrative - for a number of reasons:

  • Dedupe has, until now, mostly been a key ingredient in the special sauce by which the all-flash vendors (especially) get to claim product costs down at a level similar to spinning drives; if existing FC SANs can now easily, and at low risk, add that same function, then the price delta could widen again.
  • There's a likely performance boost as well as the $$ motivation. And, of course, the function will still work as and when users upgrade to newer devices from their favorite vendors that (oh yeah...) have the Permabit software included.
  • Permabit's is a proven piece of software; the company has also made installation really easy while providing HA via synchronous writes to ensure data safety. Its own testing shows data reductions typically running in the 4-6X range... in other words, for many workloads you might only need one-quarter to one-sixth (roughly 17-25%) of the storage space you thought you needed. That's no small improvement when you look at the cost of storage systems!
  • It has the ability to be a "pull" technology... as users learn it can be done, they may well pressure their vendors to support it. Key products from major vendors such as EMC, NetApp, Dell, and Hitachi have already been qualified... one cannot imagine that such traditional vendors are all 100% thrilled at the prospect of such pixie dust being sprinkled on their systems, but, equally, their pragmatism and desire for account retention could conceivably drive them to want to sell less capacity!
  • Why would vendors do that? Well....
    • they need more efficiency tools to stem and manage general storage growth; indeed getting more back-end efficiency might not translate to less revenue as users are likely to continue to spend the same budgets but be able to do more for those budgets. As always, there’s plenty of actual and nascent capacity growth to go around.
    • it's a bit like things such as vVols from VMware. As a “traditional” vendor you might not like it but you have to be seen to be a part of the contemporary world.
    • increasingly, vendors are making (and are going to make) more of their money from software, so squeezing more capacity out of the back-end HDDs isn't as painful for them as it once might have been.
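
The 4-6X reduction claim above translates directly into required raw capacity. A quick arithmetic sketch (the `capacity_needed` helper and the 100 TB figure are illustrative, not from Permabit's materials):

```python
def capacity_needed(logical_tb: float, reduction_ratio: float) -> float:
    """Physical capacity required to hold a given logical capacity
    after data reduction at the stated ratio."""
    return logical_tb / reduction_ratio

# At 4X you need 25% of raw capacity; at 6X, about 17%.
for ratio in (4, 6):
    phys = capacity_needed(100, ratio)
    print(f"{ratio}X reduction: 100 TB logical fits in {phys:.1f} TB physical")
```

In other words, a 100 TB logical footprint shrinks to 25 TB at 4X and to roughly 16.7 TB at 6X, which is why even modest reduction ratios move the economics so much.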
Read More

HDS bought Sepaton ... now what?

Posted: September 16, 2014   /   By: Jason Buffington   /   Tags: Storage, IT Infrastructure, Data Protection, JBuff, Information and Risk Management, HDS, Sepaton, deduplication

Have you ever known two people who seemed to tell the same stories and have the same ideas, but just weren't that into each other? And then one day, BAM, they are besties.

Sepaton was (and is) a deduplication appliance vendor that has always marketed to “the largest of enterprises.” From Sepaton’s perspective, the deduplication market might be segmented into three categories:

  • Small deduplication vendors and software-based deduplication … for midsized companies.
  • Full product-line deduplication vendors, offering a variety of in-line deduplication, single-controller scale-up (but not always with scale-out) appliances from companies that typically produce a wide variety of other IT appliances and solution components … for midsized to large organizations.
  • Sepaton, offering enterprise deduplication efficiency and performance to truly enterprise-scale organizations, particularly when those organizations have outgrown the commodity approach to dedupe.
Read More

EMC’s VNX2 Is Flash Optimized – Both The Product and The Launch (Blog Includes Video)

Posted: September 06, 2013   /   By: Mark Peters   /   Tags: Storage, EMC, IT Infrastructure, Mark Peters, deduplication

After much anticipation, EMC rolled out its next generation VNX (together with a bevy of other announcements) in a splashy live, and live-streamed, event this week. Surrounded by F1 cars, noise, and paraphernalia (not to mention a ‘special edition’ Lotus version of the product!?) in a film studio in Milan, the link to the event theme of “Speed To Lead” was pretty hard to miss. Of course it’s fun to look for the amusing (when audience members started drifting to get lunch I felt a rename to “Speed To Feed” was in order!), but there was plenty of hardcore and important news in all the glitz.

So, first off let's hit the basics of the new 'midrange' VNX. To use an old adage, it's pretty simple: it does more for less. In fact, rather a staggering lot more: it can exceed 1M IOPS, reach 30GB/sec throughput, and scale to 6PB. While of course your mileage may vary, EMC's summary is to talk about it being 70% faster while doing 4X the workload of the prior model... and generally you don't have to spend more than before to get that. The 'oomph' comes from a new 'flash optimized' architecture that benefits from 43 filed/granted patents, and includes such things as MCx (multicore optimization), both SLC (cache) and MLC (tier) flash, a Virtual Data Mover migration tool, and (on the block side) dedupe and active/active system protection.

Read More

Windows Server 2012 - It's a No-brainer

Posted: January 22, 2013   /   By: Mike Leone   /   Tags: Storage, IT Infrastructure, Data Protection, Networking, deduplication, ESG Lab, data center networking

As part of my professional new year's resolution, I plan on blogging...a lot. I'll be blogging about anything I can get my hands on. First on the list is Windows Server 2012. I recently completed my first phase of Server 2012 testing focused primarily on the new and improved storage and networking features. More specifically, I played with Storage Spaces, the Server Message Block (SMB) 3.0 protocol, Deduplication (yes, it's part of the OS now), Chkdsk, and Offloaded Data Transfer (ODX).

Read More

HP announces StoreOnce Catalyst for better deduplication

Posted: June 08, 2012   /   By: Jason Buffington   /   Tags: Data Protection, HP, deduplication, backup-to-disk

This week at HP Discover 2012, HP announced StoreOnce Catalyst as a software accelerator (API toolset) that enables its HP StoreOnce backup appliances to achieve up to 100TB/hour in backups – and in restores, too! Along with support for HP’s own Data Protector 7 (also announced at Discover), the StoreOnce family supports Symantec NetBackup and will add Backup Exec within a few months.

HP touts a next-generation, unified deduplication methodology that enables data to be deduplicated at the source, the backup server, or the storage, and then stay deduplicated throughout the data protection infrastructure.
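
That "deduplicate once, stay deduplicated" idea can be sketched as a toy source-side protocol: the client fingerprints its chunks, asks the target which fingerprints it lacks, and ships only those chunks. This is a generic illustration of the technique, not HP's actual Catalyst API; all class and function names are hypothetical:

```python
import hashlib

class DedupTarget:
    """Toy backup target: reports which chunk fingerprints it already
    holds, so only new chunks cross the wire."""
    def __init__(self):
        self.chunks = {}

    def missing(self, fingerprints):
        return [fp for fp in fingerprints if fp not in self.chunks]

    def put(self, fingerprint, chunk):
        self.chunks[fingerprint] = chunk

def backup(data: bytes, target: DedupTarget, chunk_size: int = 4096) -> int:
    """Back up data to the target, sending only chunks it lacks.
    Returns the number of bytes actually transferred."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    fps = [hashlib.sha256(c).hexdigest() for c in chunks]
    need = set(target.missing(fps))      # ask the target what it lacks
    sent = 0
    for fp, c in zip(fps, chunks):
        if fp in need:
            target.put(fp, c)
            sent += len(c)
            need.discard(fp)             # send each unique chunk only once
    return sent

target = DedupTarget()
first = backup(b"X" * 8192, target)   # initial backup: unique data is sent
second = backup(b"X" * 8192, target)  # repeat backup: nothing is sent
print(first, second)
```

Because deduplication happens before transfer, repeat backups of unchanged data move almost nothing over the network, which is what makes the source-side variant attractive for remote offices and WAN links.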

Read More
