Co-Author(s): Alex Arcilla
This ESG Technical Validation report documents testing of Hitachi Virtual Storage Platform (VSP) F900, with a focus on real-world workload performance, workflow simplification through automation, and business continuity leveraging Hitachi Infrastructure Analytics Advisor (HIAA) and the VSP’s global-active device feature, sometimes called GAD.
Business expectations are higher than ever, and IT is being challenged to deliver continuously available, highly responsive applications and services, and to do so ever faster. ESG’s annual IT spending intentions survey reveals that IT complexity is a major issue, with 68% of organizations reporting that their IT environment is either more complex (47%) or significantly more complex (21%) than it was two years ago.1 It’s no surprise that organizations are modernizing their data centers to simplify and streamline their operations. In the same survey, respondents were asked to identify the most significant data center modernization areas of investment. The three most cited areas were increasing use of server virtualization, improving backup and recovery, and IT infrastructure orchestration and automation tools (see Figure 1).
Enterprise application environments have become increasingly unpredictable as their underlying IT infrastructure grows in complexity, size, and criticality to the business. Mission-critical business application performance is highly sensitive to storage performance and latency, and highly dependent on the resilience of the enterprise IT environment. IT needs to maximize the value of its physical and virtual infrastructures to support the wide variety and number of client devices; “always-on” expectations for IT services; workforce mobilization; regulatory compliance mandates; tightening security requirements; and corporate demands.
Flash storage offers organizations the ability to simplify their IT infrastructures by consolidating workloads with different performance requirements. Active-active replication of storage can provide continuous availability and transparent failover between data centers within campus and metro areas, enabling two sites to store and service the same data. But such replication solutions are typically complex and expensive; organizations today are looking for storage solutions simple enough for virtualization and general IT administrators to handle.
Hitachi Virtual Storage Platform (VSP) F Series
The Virtual Storage Platform F series is Hitachi Vantara’s all-flash storage product line. In early 2018, Hitachi Vantara released a new generation of models that achieved increased IOPS and lower latency, greater scalability in terms of usable capacity, and support for containerized workloads. To maximize customers’ ROI, Hitachi Vantara has extended its 100% data availability guarantee to all VSP models.
Additionally, Hitachi Vantara has released the Hitachi Infrastructure Analytics Advisor, designed to help customers automate data center operations. It provides predictive analytics and insight into daily data center operations; leveraging these analytics can help organizations improve their forecasting of future resource requirements, resulting in better planning and budgeting. Hitachi Infrastructure Analytics Advisor analyzes telemetry data collected via Hitachi Data Center Analytics (HDCA). It also works with Hitachi Automation Director (HAD) to provide management automation via programmable workflows, simplifying repetitive management tasks such as zoning of Brocade switches.
Hitachi Vantara has also released enhancements to the VSP’s global-active device feature. Global-active device supports active-active processing of shared data; all interfaces to the storage system are always active, and the system synchronizes writes across the participating storage systems. Today, setting up global-active device in a customer environment employs Hitachi Data Instance Director (HDID), a copy data management platform that enables organizations to create business-focused policies that govern automated workflows.
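Conceptually, global-active device behaves like a synchronous, active-active mirror: a write accepted at either site is applied to both storage systems before the host sees an acknowledgement, so reads can be served locally from either site. The sketch below models only that behavior; all class and method names are hypothetical illustrations, not Hitachi APIs.

```python
# Conceptual model of active-active synchronous replication.
# All names here are hypothetical illustrations, not Hitachi APIs.

class Site:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block address -> data

    def write(self, addr, data):
        self.blocks[addr] = data

class ActiveActivePair:
    """Both sites accept reads and writes; every write is
    synchronously applied to both before it is acknowledged."""
    def __init__(self, site_a, site_b):
        self.sites = (site_a, site_b)

    def write(self, addr, data):
        # A write arriving at either site is mirrored to the peer
        # before the host receives an acknowledgement.
        for site in self.sites:
            site.write(addr, data)
        return "ack"

    def read(self, addr, prefer=0):
        # Reads can be served locally from either site.
        return self.sites[prefer].blocks[addr]

pair = ActiveActivePair(Site("Site 1-F900"), Site("Site 2-F900"))
pair.write(0x10, b"payload")
# Both sites now hold identical data for this block.
assert pair.read(0x10, prefer=0) == pair.read(0x10, prefer=1)
```

Because both copies are always in sync, failover between sites needs no reconfiguration, which is the property the report validates later.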
Brocade G620 Fibre Channel Switch
The Brocade G620 Switch with Gen 6 Fibre Channel from Broadcom Inc. is designed to provide consistent, predictable performance to satisfy increasingly challenging business demands. Brocade Fibre Channel is purpose-built for mission-critical storage environments such as Oracle databases. As flash performance continues to increase, Gen 6 network performance is becoming a must-have to fully realize the value of customers’ flash storage investments. Brocade has integrated zoning into Hitachi Automation Director to enable organizations to deploy and scale up applications quickly, enhancing agility. Brocade Fabric Vision technology provides a hardware and software management solution that helps simplify monitoring, maximize network availability, and reduce costs. Its monitoring, management, and diagnostic capabilities enable administrators to avoid problems before they impact operations, helping organizations meet SLAs and aligning with Hitachi’s analytics approach.
ESG Technical Validation
ESG performed hands-on testing of Hitachi’s VSP F900. We designed the testing to validate Oracle workload performance on a VSP F900 all-flash array. We also examined the setup of Hitachi’s global-active device pair via Hitachi Data Instance Director to protect Oracle data at a secondary site. Finally, we validated how the Hitachi Infrastructure Analytics Advisor can help organizations better resolve and prevent issues affecting storage performance. The test bed is shown in Figure 2.
One VSP F900, containing 64 1.9TB SSDs, was connected to the SAN using sixteen 32GFC ports via two Brocade G620 Fibre Channel switches. Four DS220 models of Hitachi Advanced Server with dual Intel Xeon Gold 6140 processors and 768 GB of RAM were installed. Each server was connected to the redundant switches via a pair of Emulex dual-port host bus adapters. An Oracle RAC instance was running on each DS220 server. Two DS120 models of Hitachi Advanced Server were used for management. Hitachi Vantara offers this configuration as a validated converged solution for Oracle databases within its Hitachi Unified Compute Platform CI portfolio.
Oracle Workload Performance
To test Oracle workload performance, an 8TB Oracle 12c RAC database was deployed on the four DS220 servers. We elected to use the peakmarks benchmark (R9.2) to measure Oracle workload performance. Peakmarks measures performance using Oracle RAC and tests that focus on various aspects of the database, server, and storage to assess Oracle workload performance holistically and at the component level. We set up one Oracle instance to run peakmarks. For each Oracle RAC instance, peakmarks is set up automatically to utilize a maximum of 75% of available database capacity.
ESG Lab Testing
First, we compared Hitachi’s VSP F900 baseline IOPS and throughput performance against previously conducted testing2 of VSP F800 using a similarly configured Oracle RAC cluster and peakmarks’ storage system performance tests.
Figure 3 shows the results of the STO-RR test, configured for 100% 8KB random reads. VSP F900 was able to support more than 1.4 million IOPS, 2.68x the IOPS of VSP F800, with an almost identical average response time of just 1ms.
Next, we compared random write performance using the STO-RWF test configured for 100% 32KB random writes. As seen in Figure 4, the VSP F900 was able to sustain nearly 12,000 MB/sec.
Finally, we looked at Oracle OLTP performance using the peakmarks DBX-U25 test, which simulates multiple users performing database updates that modify 25 rows per transaction. This is a particularly challenging workload for a storage system because it consists of more than 50% random writes.
As Figure 5 shows, VSP F900 was able to process 8,936 transactions per second, nearly double the volume of transactions as VSP F800.
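For context, the generational comparison can be sanity-checked with simple arithmetic on the reported figures. The F800 IOPS below are back-calculated from the stated 2.68x multiple (not independently measured), and the write-IOPS figure simply relates throughput to block size:

```python
# Back-of-the-envelope checks on the reported figures.
# F800 read IOPS are back-calculated from the stated 2.68x
# ratio, not independently measured.

f900_iops = 1_400_000              # STO-RR, 100% 8KB random reads
f800_iops = f900_iops / 2.68       # implied by the reported multiple

# Throughput and IOPS are related by block size: nearly
# 12,000 MB/sec of 32KB random writes (STO-RWF) implies
# roughly 384,000 write IOPS.
throughput_mb_s = 12_000
block_kb = 32
write_iops = throughput_mb_s * 1024 / block_kb

print(f"Implied F800 read IOPS: {f800_iops:,.0f}")
print(f"Implied F900 write IOPS: {write_iops:,.0f}")
```

The same block-size relationship explains why the 8KB read test is reported in IOPS while the 32KB write test is reported in MB/sec: each view highlights the dimension the workload stresses most.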
Why This Matters
Delivering consistent application performance is one of the most challenging aspects of complex production environments, especially for databases and data-heavy applications. High performance OLTP and data analytics are no longer esoteric requirements for a niche market. With 69% of respondents to ESG research having already deployed or planning to deploy the technology,3 solid-state storage is decidedly mainstream. While some of its applications are exotic (genomics, high performance computing, and weather modeling), it is also needed for everyday tasks such as keeping up with customers’ online transactions, handling seasonal workload increases, and simply enabling organizations to outmaneuver their competition.
ESG Lab validated the performance of the Hitachi Virtual Storage Platform F series running OLTP and analytics workloads against an 8TB data set in an Oracle RAC environment and found that while the VSP F800 provided outstanding performance for an all-flash system, VSP F900 took performance to the next level, providing more than double the IOPS and nearly double the transactions per second at identical response times.
Organizations cannot underestimate the importance of data backup and recovery, especially when dealing with business-critical database workloads. Hitachi’s global-active device feature ensures that data center site failover is transparent while eliminating the need for reconfiguration, specifically when unexpected failures or planned updates occur. Using global-active device, organizations can provide continuous data access to end-users, thus minimizing application downtime.
Note: While our testing used two systems in the same data center to simulate two separate sites, Brocade offers a full extension solutions portfolio designed to give organizations flexible deployment options for replication and support of global-active device. As of this writing, ESG and Hitachi Vantara are in discussions regarding possible validation of Brocade’s extension solutions to support global-active device over distance. For more information, see the link in the footnote.4
ESG Lab Testing
ESG examined how to set up global-active device between two VSP F900 nodes using Hitachi Data Instance Director, Hitachi’s copy data management platform. The two VSP F900 nodes represented two data centers, designated as the primary and secondary sites. We created an Oracle RAC stretch cluster on the four DS220 servers. Four FC paths were configured from each VSP F900 to each DS220 server, and two replication links were created between the VSP F900 systems. To facilitate site replication, dynamic provisioning pools of virtual volumes (VVols) were also set up on the primary VSP F900. We replicated the Oracle workloads using the VVols.
We first navigated to Hitachi Data Instance Director to set up both VSP F900s as storage nodes. By clicking on Nodes, Hitachi Data Instance Director guided us through a short setup process. After creating the nodes, we proceeded to create the policy that would set up the global-active device replication between the two VSP F900s. We clicked on Policies in the Hitachi Data Instance Director main menu and clicked on the + icon to add a new policy (see Figure 6).
After we entered the name HDID-GAD-Policy into the designated field, Hitachi Data Instance Director prompted us to classify the policy, i.e., specify the storage components on which the policy would be activated (e.g., a physical device such as Hitachi block storage). A policy can also be classified for applications or hypervisors within a specific Hitachi storage system. Next, we edited this policy by entering the logical device names representing the five Oracle workloads to be replicated in the Logical Devices field. These logical devices corresponded to the five VVols associated with the Oracle workloads. After entering the logical devices manually, we defined the attributes of the replication operation, such as when the policy would be executed. For this example, we chose to run the policy when it was manually triggered.
Next, we created the data flow associated with the global-active device replication, as shown in Figure 7. The data flow designates the source of the data to be replicated and the target to which the data is replicated. We clicked on Data Flows in the Hitachi Data Instance Director main menu. To define the source and target, we clicked on Site 1-F900 and then Site 2-F900 under the Nodes tab and dragged each to the workspace so that their icons appeared. Hitachi Data Instance Director automatically defined Site 1-F900 as the source, indicated by the arrow pointing toward Site 2-F900.
To create the global-active device pair, ESG selected the Site 2-F900 icon, and its associated policy appeared on the right side of the screen. We then selected HDID-GAD-Policy to configure Site 2-F900 for global-active device replication. The window named GAD-Replicate configuration on ‘Site 2-F900’ appeared (shown in Figure 7). The administrator has five options for replication; we chose the Active-Active Remote Clone option since we were using global-active device. We also chose other options necessary for global-active device replication, such as the target pool on Site 2-F900 and the target quorum device. After saving these choices, we proceeded to activate the policy. If no errors are detected by Hitachi Data Instance Director when compiling the policy, the data flow is saved, as was the case in our testing.
We then manually triggered the replication policy. We navigated to the views showing the contents of the primary and secondary storage via the Hitachi Device Manager and saw that the five VVols in Site 1-F900 were replicated in Site 2-F900 (as shown in Figure 8).
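The GUI workflow just described (register nodes, define a policy over logical devices, wire a source-to-target data flow, then trigger it) can be summarized in a small conceptual model. This is a sketch of the steps only; the class names and structure are hypothetical illustrations, not an HDID API.

```python
# Conceptual model of the HDID workflow described above:
# register storage nodes, define a replication policy, wire a
# data flow, then trigger it. All names are hypothetical
# illustrations of the GUI steps, not an HDID API.

class Node:
    def __init__(self, name):
        self.name = name
        self.vvols = {}           # VVol name -> data

class Policy:
    def __init__(self, name, logical_devices, mode):
        self.name = name
        self.logical_devices = logical_devices    # VVols to replicate
        self.mode = mode          # e.g. "Active-Active Remote Clone"

class DataFlow:
    def __init__(self, source, target, policy):
        self.source, self.target, self.policy = source, target, policy

    def trigger(self):
        # Replicate each logical device from source to target,
        # mirroring the manual trigger used in testing.
        for dev in self.policy.logical_devices:
            self.target.vvols[dev] = self.source.vvols[dev]

site1, site2 = Node("Site 1-F900"), Node("Site 2-F900")
site1.vvols = {f"VVol-{i}": f"oracle-data-{i}" for i in range(1, 6)}

policy = Policy("HDID-GAD-Policy", list(site1.vvols),
                mode="Active-Active Remote Clone")
DataFlow(site1, site2, policy).trigger()
assert site2.vvols == site1.vvols   # all five VVols replicated
```

The value of the policy abstraction is that the replication details (volume IDs, target pools, quorum device) are captured once and reused, rather than re-entered in configuration files for every operation.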
Providing continuous data access remains an important issue as end-users expect no downtime when accessing their Oracle applications, especially when those applications are critical to achieving business objectives. While global-active device has served organizations well in ensuring business continuity, the ability to implement global-active device via Hitachi Data Instance Director makes the process more efficient, as this workflow eliminates the need to deal with numerous configuration files and manually enter the correct information, such as filenames and volume IDs.
Why This Matters
Providing continuous data access to end-users is essential for business continuity. Using simple tools that facilitate the creation of policy-based copy data management can ultimately help organizations to achieve effective data backup and recovery.
ESG validated that implementing data protection using global-active device can help organizations to create and deploy policies for effective copy data management across two data centers. We also noted that the use of Hitachi Data Instance Director to create a global-active device pair makes the process user-friendly, thus minimizing errors while decreasing the time needed to ensure proper backup.
Deep analytics, powered by artificial intelligence and machine learning, has provided organizations with the opportunity to dramatically reduce problem resolution time while proactively identifying and addressing performance-impacting issues before they occur. Hitachi Infrastructure Analytics Advisor can arm organizations with the necessary information to improve uptime.
Brocade Fabric Vision provides another layer of in-depth network analytics designed to provide network visibility and actionable insight that IT administrators can use to ensure operational consistency and stability. Brocade IO Insight non-disruptively gathers and analyzes I/O statistics on any device port. To accomplish this, IO Insight monitors every frame on every port. Monitoring policies can be set to proactively report on and maintain SLAs and enable storage performance troubleshooting and optimization.
ESG Lab Testing
ESG explored how the administrator can interact with the Hitachi Infrastructure Analytics Advisor. We first opened the dashboard (as shown in Figure 9). We noted that an administrator can change panels to display different metrics and charts. Also, the administrator can customize reports for the dashboard by leveraging existing queries.
Next, we clicked on the red triangle located on the right side of the dashboard in Figure 9. The Hitachi Infrastructure Analytics Advisor brought us to the screen shown in Figure 10.
This end-to-end view revealed how the issue may be related to other components in the storage infrastructure at multiple levels (port, processor, cache, storage pool, or parity group). We also clicked on the identified issue (marked by the red triangle), which revealed performance detail. As shown in Figure 10, an administrator can see that the metric (KB/sec) has crossed the predefined threshold, represented by the red line.
Figure 11 shows the additional investigation that can be accomplished via Hitachi Infrastructure Analytics Advisor. ESG drilled down into the identified performance issue and, using the drop-down menu to the right of the top screen, checked whether other metrics were flagged red, i.e., whether other parts of the overall storage system may have contributed to the issue.
We also viewed how the Hitachi Infrastructure Analytics Advisor can assist in future resource planning and budgeting. We selected multiple metrics tracked by Hitachi Infrastructure Analytics Advisor to see how they would perform over a forecasted period. In the screen located in the bottom right corner of Figure 11, we see that the Hitachi Infrastructure Analytics Advisor forecasts that the top metric will cross the predefined threshold, as indicated by the red line. This can signal to an administrator to investigate the current storage infrastructure and determine if additional storage resources need to be purchased to prevent such an issue from ultimately impacting application performance.
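The forecasting idea described above (project a metric’s trend forward and flag when it will cross a predefined threshold) can be illustrated with a simple linear extrapolation. This is a generic sketch of the concept, not HIAA’s actual analytics; the function name and sample data are ours.

```python
# Generic illustration of threshold forecasting: fit a linear
# trend to a metric's history and estimate when it will cross
# a predefined threshold. Not HIAA's actual algorithm.

def forecast_crossing(history, threshold):
    """history: metric samples at equal intervals.
    Returns the first future interval index at which the fitted
    trend exceeds the threshold, or None if the trend is flat
    or declining."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Least-squares slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None
    # Solve intercept + slope * t >= threshold for t.
    t = (threshold - intercept) / slope
    return max(int(t) + 1, n)   # first whole interval at/after crossing

# Hypothetical example: pool throughput (KB/sec) trending upward.
samples = [400, 420, 445, 460, 480, 505]
print(forecast_crossing(samples, threshold=600))
```

An administrator armed with such a projection can act before the threshold is actually crossed, which is exactly the planning-and-budgeting use case the report describes.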
Why This Matters
Organizations are focused on ensuring acceptable application performance so that end-users can complete critical business tasks. Identifying and resolving issues quickly is only one way to decrease overall downtime. Having the ability to proactively identify issues that can impact application performance can further decrease downtime and contribute to business continuity.
ESG validated that Hitachi Infrastructure Analytics Advisor can both help organizations identify the causes of current performance-impacting issues and anticipate what needs to be done to prevent them from arising in the first place. We saw how an administrator can leverage Hitachi Infrastructure Analytics Advisor to see how an identified issue fits into the entire storage infrastructure while pinpointing other possibly affected areas, minimizing any impact to business continuity. Finally, we saw how forecasting the behavior of metrics against predefined thresholds can help the administrator plan and budget for future resource requirements.
The Bigger Truth
As enterprise IT infrastructure becomes more virtualized and grows in complexity, size, and criticality to the business, application environments have become increasingly unpredictable. This presents a challenge in supporting high performance and latency-sensitive mission-critical business applications, which also require extremely high resilience. IT needs to maximize the value of their physical and virtual infrastructures, manage “always-on” expectations for IT services, enable workforce mobilization, and satisfy regulatory compliance mandates and tightening security requirements.
An ESG survey found that increasing use of server virtualization is the most cited area of data center modernization investment, with improving backup and recovery and IT infrastructure orchestration and automation tools rounding out the top three.5
ESG validated that the VSP F900 takes OLTP performance to the next level, providing more than double the IOPS and nearly double the transactions per second of the previous generation at identical response times, using standard SSDs. ESG found that implementing data availability using global-active device can help organizations create and deploy policies for effective copy data management across two data centers, and that Hitachi Data Instance Director makes the process user-friendly, minimizing errors while decreasing the time needed to ensure proper backup. ESG validated that Hitachi Infrastructure Analytics Advisor can help organizations identify the causes of current performance issues, anticipate future challenges, and take recommended actions. We also verified how an administrator can see how an identified issue potentially impacts other areas to minimize impact to business continuity.
Today’s storage environment is rapidly increasing in size and complexity, while end-users have ever-higher expectations of instant access and around-the-clock availability. From the speed and automated operation of the Hitachi Infrastructure Analytics Advisor to the dramatic business continuity capabilities of Hitachi’s global-active device feature, the breadth and depth of the features offered by Hitachi can be used to meet the precise needs of any organization. The VSP F900 combined with Hitachi’s suite of supporting solutions offers easy-to-manage scalability and reduced operational costs—powerful and worthy of serious consideration by any IT organization being asked, once again, to do “more with less” in its data center.
1. Source: ESG Research Report, 2018 IT Spending Intentions Survey, February 2018.↩
2. https://www.hitachivantara.com/en-us/pdf/architecture-guide/hitachi-ucp-6000-for-oracle-rac-using-vsp-f800-haf-cb-2500.pdf ↩
3. Source: ESG Brief, Flash Storage: Growth, Acceptance, and the Rise of NVMe, September 2017.↩
5. Source: ESG Research Report, 2018 IT Spending Intentions Survey, February 2018.↩
ESG Validation Reports
The goal of ESG Validation reports is to educate IT professionals about information technology solutions for companies of all types and sizes. ESG Validation reports are not meant to replace the evaluation process that should be conducted before making purchasing decisions, but rather to provide insight into these emerging technologies. Our objectives are to explore some of the more valuable features and functions of IT solutions, show how they can be used to solve real customer problems, and identify any areas needing improvement. The ESG Validation Team’s expert third-party perspective is based on our own hands-on testing as well as on interviews with customers who use these products in production environments.