Here's the bottom line: HDS is in a primo spot to take advantage of the Internet of Things. Outside of GE, I can't think of anyone in as good a position.
Google, with its marketing genius, just did the greatest talent recruitment exercise I've ever seen.
Today the Boston Globe published an article suggesting that the great (sarcasm) state of Massachusetts should repeal a longstanding law allowing non-compete agreements to be enforced (though oddly, only for tech companies - so non-competes would remain totally fine for everyone else).
It specifically called out EMC as a long-standing enforcer of these agreements.
Thus, my two cents.
In this video blog, I discuss considerations and requirements for the Data Center of Tomorrow, and how we make the leap from the Data Center of Today (well, really, Yesterday) to the Data Center of Tomorrow.
First, I can't believe that after changing the name of ESG from Enterprise Storage Group to Enterprise Strategy Group almost 13 years ago, I still got called the former in this Forbes article.
The author quotes me somewhat incorrectly a few times, but the overall point is correct. I didn't say EMC specifically follows the "5 step program" outlined; I said all incumbent players tend to do this.
What if networks (or RAID controllers) were horizontal versus edge/core vertical? What if we didn't even have switches? What if the server did its own switching and used a big mesh fabric to create a transient, direct-connected tunnel from point A to whatever point B is - another node, storage, whatever? You wouldn't need buffering or queueing, etc. You would just open the pipe and rock and roll.
For your consideration - how I think about "software defined."
A server is a box with CPU cores, memory, flash, and storage. A storage array is a box with CPU cores, memory, flash, and storage. A network switch is a box with CPU cores, memory, flash, and storage. So really, what's the difference between a white-box server and a storage array or a switch? Ports? Capacity? Big deal. The difference is the software function that is defined to execute on the various personalities involved. It's not the hardware anymore.
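To make that point concrete, here's a toy sketch (entirely my own illustration - the classes, roles, and numbers are invented, not any vendor's code): one generic white box, with the "personality" defined purely in software.

```python
# Toy illustration: identical white-box hardware, different software personalities.
# All names and figures here are made up for the sketch.

class WhiteBox:
    """Generic hardware: CPU cores, memory, flash, and storage. Nothing else."""
    def __init__(self, cores, mem_gb, flash_gb, disk_tb):
        self.cores = cores
        self.mem_gb = mem_gb
        self.flash_gb = flash_gb
        self.disk_tb = disk_tb
        self.personality = None

    def load(self, personality):
        # The "definition" is pure software -- the hardware never changes.
        self.personality = personality
        return self

# Three personalities for the exact same box:
def server(box):
    return f"app server: {box.cores} cores running VMs"

def array(box):
    return f"storage array: {box.disk_tb} TB behind a cache of {box.flash_gb} GB flash"

def switch(box):
    return f"switch: forwarding packets with {box.mem_gb} GB of buffer memory"

box = WhiteBox(cores=16, mem_gb=128, flash_gb=800, disk_tb=48)
for role in (server, array, switch):
    print(box.load(role).personality(box))
```

Same box every time through the loop; only the loaded function changes - which is the whole argument in miniature.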
It's been a while since I've seen a new way to fail in business, hence the dearth of additions to this series. But now we have a new one - the fantasy business model.
In short, technology is nice - but if you assume you have a "better way" to do something that can already be done some "less better" way by spending money with an incumbent vendor, your road will be brutal and success statistically rare. I'm glad you have a better mousetrap, but history is littered with the carnage (or lack thereof, in this metaphor) of better mousetraps.
But if you have a better mousetrap - or (gasp!) sometimes a technically INFERIOR mousetrap - combined with a disruptive business model, you have a far better chance of upsetting the status quo - or, better yet, the money flowing from the customer to the incumbent.
It's business model changes that upset the traditional way of doing things.
In this case I don't believe Cisco can ultimately keep SDN from happening universally, but it sure as heck is going to slow down that train for a while. Eventually, if SDN does see the light of day as I suspect it will, Cisco's core networking boondoggle will come under heavy fire - and Cisco will be forced to adapt its business model in that sector, or abandon it.
Check out this video blog entry to see why, rather than "software-defined everything," we need to think about data-defined infrastructure - a model in which IT designs everything from the data outward, to make sure we store, protect, and deliver it in a way that helps the business make money or save money.
Software-defined everything. SDE. The latest craze in marketing mayhem.
There is, of course, some legitimacy to the phrase - but doesn't software already "define" everything in our IT world? Doesn't software provide the execution sets that tell our "stuff" what we want it to do? Isn't everything, therefore, already software-defined in many ways?
Long-time friend and ESG'er Mike Beaudet has taken on this year's ESG charity sponsorship efforts and is leading the charge to raise awareness and funding for a local chapter of Best Buddies (along with Tom Brady, mind you).
I talk about the need for both disciplines - archiving and backup - as separate but complementary tasks in this video blog.
I was a bit surprised at the big media's lack of interest in HP's Moonshot announcement. I suppose the complexity, combined with HP's less-than-stellar PR maneuvers over the last few years, could keep some folks at bay - but this announcement has all the makings of a MASSIVE and exciting outcome.
Nothing is ever truly new in IT, and Big Data is no exception. Big Data in 2013 is SAP (ERP) twenty years ago, or Siebel (CRM) 12 years ago. It's rolling out the exact same way.
The storage should optimize itself constantly within the constraints we give it. Once it meets your demands, it should then optimize for power, or put data somewhere else, or take up less space, and so on. All the "hard" tasks that are impossible for a human to do are entirely possible for a machine to do - we just don't let it do them, for some unknown reason. It's true outside of storage too, of course, but storage is awful at this. We'd want the same things to apply to application workloads - and forget about the storage/server/network stuff altogether - but we need to walk before we run.
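Here's a deliberately tiny sketch of the idea - the machine placing data to satisfy the human-set constraint first, then optimizing for power on its own. The tiers, latencies, and wattage figures are all invented for illustration, not real product numbers.

```python
# Toy sketch: storage re-optimizing placement within human-set constraints.
# Tier names, latencies, and watts-per-TB figures are made up.

TIERS = {
    "flash": {"latency_ms": 1,   "watts_per_tb": 15},
    "disk":  {"latency_ms": 8,   "watts_per_tb": 10},
    "tape":  {"latency_ms": 500, "watts_per_tb": 1},
}

def place(datasets, max_latency_ms):
    """Meet each dataset's latency requirement first, then minimize power.

    datasets: {name: required latency in ms}; max_latency_ms is a global cap.
    """
    placement = {}
    for name, required_ms in datasets.items():
        # Candidates that satisfy both the global and per-dataset constraint...
        ok = [t for t, v in TIERS.items()
              if v["latency_ms"] <= min(max_latency_ms, required_ms)]
        # ...then the machine picks the cheapest in power among them.
        placement[name] = min(ok, key=lambda t: TIERS[t]["watts_per_tb"])
    return placement

print(place({"oltp": 2, "reports": 60, "archive": 10_000}, max_latency_ms=1_000))
```

The human only states the constraints; the tedious "where exactly does this byte live" decision - and the re-decision every time conditions change - is the part the machine should own.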
It will take HP a long time to live down the debacle of the last few years. Having said that, I'm happy to report that [HP] may be my new favorite company to watch and talk about. Why, you ask? Because they are doing really, really smart things.