For the last few years, the processing space has been red hot. Between startups and mainstay chip vendors, it’s an ongoing arms race to address the specialized needs of modern workloads and applications in core data centers, at the edge, and in the public cloud. In fact, when ESG asked respondents to cite the aspects across the entire data pipeline that are most frequently responsible for causing delays, the top response was data processing. Organizations want speed, reliability, and cost effectiveness, and getting there is forcing them to rethink their approach to computing, especially with the rapid adoption of AI technologies fueling the booming compute market.
Satisfying AI workload requirements is a growing challenge for many organizations. Traditional compute is simply unable to deliver the orders-of-magnitude improvements organizations are looking for from their compute infrastructure. And it’s a losing proposition to just keep throwing more and more processing power at the problem. It’s too expensive. It’s too big a footprint. And it’s too power hungry. We’re seeing a growing need for specialized compute to address the different workloads in the AI space, mainly training and inference.
Training is the model creation process: a model is fed data so it can learn. Inference refers to the stage where the trained model is used to make predictions on new incoming data. Of the two, training is far more resource intensive. And while GPUs, for example, can address both types of workloads, specialized compute tailored to the AI workload (training vs. inference) has emerged and attracted a surprising number of startups looking to add their IP and approach into the mix.
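To make the two phases concrete, here is a minimal sketch using a toy linear-regression model in plain NumPy. The model, data, and function names are illustrative assumptions, not anything from Habana or Intel; real AI workloads involve vastly larger models, but the training/inference split looks the same in outline.

```python
import numpy as np

def train(X, y, lr=0.1, epochs=500):
    """Training: repeatedly feed the model data so it learns its weights.
    This is the resource-intensive phase -- many passes over the dataset."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = X @ w + b
        err = pred - y
        w -= lr * (X.T @ err) / len(y)   # gradient step on the weights
        b -= lr * err.mean()             # gradient step on the bias
    return w, b

def infer(w, b, X_new):
    """Inference: a single cheap forward pass with the trained model."""
    return X_new @ w + b

# Toy dataset following the rule y = 2x + 1 (hypothetical example data)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = train(X, y)                       # expensive: hundreds of passes
print(infer(w, b, np.array([[4.0]])))    # cheap: one prediction, close to 9.0
```

The asymmetry visible even in this sketch (500 passes over the data to train, one matrix multiply to predict) is what motivates separate chips optimized for each phase.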
And here we are. The latest acquisition news: Intel is acquiring Habana Labs for $2 billion. Habana offers specialized compute for training and inference. It has two chips, Gaudi (training) and Goya (inference), both of which have a similar value proposition: lower the cost of AI compute in both power and floor space without sacrificing performance. Actually, in Habana’s case, it’s equally about improving performance. I remember talking to Habana while they were still in stealth mode, and the performance results made me do a double take. This was a no-brainer for Intel. It gets a more robust AI portfolio, it can better address customers’ specialized needs, and with Intel expecting to generate over $3.5 billion in AI-driven revenue in 2019, the $2 billion price tag almost seems too good to be true.
This will not be the last acquisition in the specialized AI hardware space; I expect we’ll see more activity in 2020. The market is ripe for consolidation as the major chip vendors and infrastructure providers look to bolster their existing AI portfolios with purpose-built solutions that can address any business requirement.