Today, DataDirect Networks (DDN) announced a partnership with NVIDIA to deliver infrastructure designed for artificial intelligence (AI) and machine learning workloads. The solution combines DDN's A3I with the NVIDIA DGX-1 and is preconfigured to eliminate the guesswork associated with infrastructure deployment and configuration. DDN also claims the system offers both the high performance and the massive capacity scale required by the intense demands of training AI algorithms. According to DDN, the solution delivers 200 PB (yep, that says petabytes) of capacity in a single namespace and more than 1.4 TB/s of performance.
What does all this mean?
- AI is real, and it is here. DDN is not the first IT vendor to announce a partnership with NVIDIA to deliver infrastructure for AI workloads, and it will likely not be the last. Given the advancement of the tools and the supporting infrastructure, AI-based workloads are quickly moving past the emergent, niche category and becoming common business practice. The question any business should be asking today is not "Why should we do AI?" but "Why not?"
- Infrastructure is no longer an excuse with AI. In my recent TechTarget article, I discussed the rise of AI and how infrastructure has been holding firms back: 44% of organizations with active AI and machine learning projects named IT infrastructure, whether its cost or a lack of capabilities, as one of their top three challenges. Multiple IT vendors (like DDN) and public cloud vendors have recognized this as well and have taken steps to resolve infrastructure-related challenges and complexity. In other words, the hardware, storage or otherwise, should no longer be a hurdle. No more excuses.
- Focus on maximizing the value of GPUs. Any MBA grads out there who were required to read Goldratt's "The Goal" or "Critical Chain" know that every process or system has a bottleneck: a resource utilized at or near 100% that limits the throughput of everything else. The corollary is that systems should be designed so that the resource running at 100% is the most valuable/expensive element in the chain. From everything I have seen, the GPU is the infrastructure element to maximize in an AI-centric infrastructure. I would design everything else so that those NVIDIA systems were utilized as much as possible, and from what I understand, that was a goal of DDN's solution design as well.
- Also, if you haven’t read those two books, congratulations, I just saved you from reading over 600 pages. Seriously, though, they are decent reads relative to other business books, so check them out if you have the time.
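The bottleneck logic above is easy to see with a toy model (my own illustrative sketch with made-up numbers, not DDN's sizing tool): in a simple serial pipeline, end-to-end throughput is capped by the slowest stage, and every other stage's utilization follows from that bottleneck. The goal is to pick capacities so the GPU, the most expensive resource, is the stage pinned at 100%.

```python
def pipeline_utilization(capacities):
    """Given per-stage capacities (e.g., GB/s), return the pipeline's
    end-to-end throughput and each stage's resulting utilization (0.0-1.0)."""
    throughput = min(capacities.values())  # slowest stage caps the pipeline
    return throughput, {stage: throughput / cap for stage, cap in capacities.items()}

# Hypothetical numbers for an AI training data path; the GPUs can only
# consume 2.5 GB/s, so storage and network are provisioned with headroom.
capacities = {
    "storage_read_GBps": 4.0,
    "network_GBps": 5.0,
    "gpu_ingest_GBps": 2.5,
}
throughput, util = pipeline_utilization(capacities)
print(throughput)                            # 2.5  -> GPU-bound, as desired
print(util["gpu_ingest_GBps"])               # 1.0  -> GPU fully utilized
print(round(util["storage_read_GBps"], 3))   # 0.625 -> storage has headroom
```

If the storage number were the smallest instead, the GPUs would sit partially idle while the cheapest component ran flat out, which is exactly the design failure Goldratt warns about.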
With so many vendors targeting the AI space, it is still anyone’s game. This is a space worth watching over the next few years.
Additionally, this news follows closely on the heels of DDN’s acquisition of Tintri, a storage vendor that offered a number of strong innovations as well as impressive usability. It will be interesting to watch how the integration proceeds, but I suspect good things.