HighPoint and Xinnor Unite to Deliver Gen5 Enterprise-Class RAID 5 & 6 Storage Performance Excellence
The growing demands of professional media workflows and data-intensive applications require storage solutions that deliver both exceptional performance and reliability. This technical analysis explores the integration of HighPoint's Rocket 1628A NVMe Switch Adapter with the xiRAID RAID engine in a professional workstation with relatively modest compute resources. By combining these technologies, we aim to demonstrate how organizations can achieve enterprise-level storage performance using commercially available components, without the need for specialized hardware or extensive system optimization. The tests conducted focus on real-world performance metrics across various workloads.
Test Environment
In this study, we conducted tests to explore various scenarios, selecting configurations to evaluate performance across different use cases. This included two small RAID 5 setups (one with four Micron 7450 PRO drives and one with four Micron 7400 PRO drives; storage set 1) and a RAID 6 setup with eight KIOXIA KCD8DPUG30T7 drives (storage set 2).
Hardware configuration:
- Motherboard: ProArt X670E-CREATOR WIFI
- CPU: AMD Ryzen 9 7950X (16 cores, 32 threads)
- RAM: 64 GiB system memory (4 x 16 GiB)
- Switch Adapter: HighPoint Rocket 1628A Switch Adapter
- Storage set 1:
  - 4 x Micron 7450 PRO NVMe SSD (7,680 GB each), Linux NVMe devices 0-3
  - 4 x Micron 7400 PRO NVMe SSD (7,680 GB each), Linux NVMe devices 4-7
- Storage set 2: 8 x KIOXIA KCD8DPUG30T7 NVMe SSD (30,720 GB each)
Software configuration:
- Operating System: Red Hat Enterprise Linux 8.10
- RAID Software: xiRAID Classic v4.1.0
System optimizations:
- BIOS settings:
  - Global C-state Control: Disabled
  - Core Performance Boost: Enabled
- System profile: tuned-adm profile throughput-performance
Raw Drive Performance Testing
Prior to testing, all drives underwent a double overwrite using 128 KiB blocks (workload-independent preconditioning), followed by sequential read and write tests. The drives were then overwritten using 4 KiB blocks before conducting random read and write tests.
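The preconditioning step described above can be expressed as a fio job file. The following is a sketch of its likely shape (the actual job files used in the tests are listed in the appendix; the device name is an assumption):

```ini
; Hypothetical workload-independent preconditioning job: overwrite the whole
; device twice with 128k sequential writes. Device name is an assumption.
[global]
filename=/dev/nvme0n1
direct=1
ioengine=libaio

[precondition-seq]
rw=write
bs=128k
iodepth=32
loops=2
```

An analogous job with `bs=4k` would be run before the random read and write tests.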
The performance results closely aligned with the manufacturer's specifications, as evidenced by the tables below.
Raw drive performance for the Micron 7450/7400 PRO drives:
During concurrent random read operations across all drives, we encountered a platform limitation due to CPU constraints, achieving 7,345K IOPS instead of the expected 8,000K IOPS.
NVMe drives with high random read IOPS generate frequent interrupts or require increased polling, which raises CPU utilization. As the drive approaches its maximum IOPS capacity, CPU usage can rise disproportionately due to context switching, interrupt handling, or contention in the IO subsystem.
As a result, in this hardware configuration we encountered CPU limitations even at the raw drive level. Consequently, we decided to skip this test for the second NVMe drive set (KIOXIA KCD8DPUG30T7), as the same CPU bottleneck would likely occur.
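One way to observe this interrupt pressure on a Linux host is to sum the per-queue NVMe interrupt counters (a sketch; output depends entirely on the host, and a system without NVMe devices prints nothing):

```shell
# Sum the cumulative interrupt count for each NVMe queue since boot.
# The per-CPU counter columns are numeric; trailing text fields (controller
# and trigger names) coerce to 0 in awk and do not affect the sum.
awk '/nvme/ { sum = 0; for (i = 2; i < NF; i++) sum += $i; print $NF, sum }' /proc/interrupts
```

Queues with rapidly growing counts under a random-read load indicate exactly the interrupt-handling overhead described above.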
CPU consumption with random read to all NVMe drives
CPU utilization of FIO threads was monitored during the most resource-intensive operations: random read and random write tests.
CPU consumption in the context of FIO threads for 8 drives
Raw drive performance for the KIOXIA KCD8DPUG30T7 drives:
xiRAID Performance Testing
RAID5
Given the platform's configuration of four Micron 7450 and four Micron 7400 drives, we established two RAID 5 arrays, designated media1 (for Micron 7450) and media2 (for Micron 7400), and conducted tests on each array individually and on both simultaneously.
Throughout the testing process, we collected CPU utilization statistics for all xiRAID threads using a custom bash script. This script leverages the top command to filter and aggregate CPU usage for processes with a COMMAND beginning with xi, then normalizes the sum by the number of CPU cores. The FIO configuration files and statistics collection script listings are included in the appendix.
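The aggregation approach can be sketched as follows. The original script parses `top` output; this equivalent sketch uses `ps` for the same effect, and the assumption is that all xiRAID worker processes have command names beginning with "xi":

```shell
#!/usr/bin/env bash
# Sketch of the statistics-collection approach: sum %CPU across all
# processes whose command name starts with "xi" (the xiRAID threads),
# then normalize the total by the number of CPU cores.
cores=$(nproc)
ps -eo comm=,pcpu= | awk -v cores="$cores" '
  $1 ~ /^xi/ { sum += $2 }                     # accumulate %CPU of xiRAID processes
  END        { printf "%.2f\n", sum / cores }  # per-core normalized utilization
'
```

Run in a loop during a test, this yields a time series of normalized xiRAID CPU utilization comparable across machines with different core counts.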
xiRAID configurations
Combined RAID5 Performance (media1 + media2)
Baseline performance calculation for the two arrays:
- Sequential Read: 50.1 GB/s (100% of raw performance)
- Sequential Write: 30.9 GB/s (75% of raw performance: 41.2 GB/s x 75% = 30.9 GB/s)
- Random Read: 7,345K IOPS (100% of raw performance)
- Random Write: 819K IOPS. A performance test of 8 NVMe drives under simultaneous load with a 50/50 read/write pattern demonstrated 3,278K IOPS (Read: 1,639K IOPS, Write: 1,639K IOPS). With a RAID 5 write penalty factor of 4, the expected performance for the two RAID 5 arrays is calculated as 3,278 / 4 = 819K IOPS.
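The random-write baseline above follows standard write-penalty arithmetic, which can be checked directly (values taken from the measurements above; each logical write to parity RAID costs several backend operations):

```shell
# Expected random-write IOPS for parity RAID, derived from a measured
# 50/50 mixed-workload result. RAID5 penalty is 4 (read data + read parity
# + write data + write parity); RAID6 penalty is 6.
mixed_kiops=3278     # measured 50/50 mixed IOPS across all 8 drives, in K IOPS
raid5_penalty=4
echo $(( mixed_kiops / raid5_penalty ))   # prints 819
```

The same arithmetic with a penalty factor of 6 reproduces the RAID 6 baseline later in this report.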
Test Results:
* Requires further investigation due to CPU constraints encountered on the test platform.
Individual RAID5 Performance on 4 Micron 7450 Drives (media1)
Baseline performance calculation for the media1 RAID array:
- Sequential Read: 25.1 GB/s (100% of raw performance)
- Sequential Write: 17.1 GB/s (75% of raw performance: 22.9 GB/s x 75% = 17.1 GB/s)
- Random Read: 4,010K IOPS (100% of raw performance)
- Random Write: 419K IOPS. A performance test of 4 NVMe drives under simultaneous load with a 50/50 read/write pattern demonstrated 1,678K IOPS (Read: 839K IOPS, Write: 839K IOPS). With a RAID 5 write penalty factor of 4, the expected performance for this RAID 5 array is calculated as 1,678 / 4 = 419K IOPS.
Test Results:
Individual RAID5 Performance on 4 Micron 7400 Drives (media2)
Baseline performance calculation for the media2 RAID array:
- Sequential Read: 25.0 GB/s (100% of raw performance)
- Sequential Write: 14.1 GB/s (75% of raw performance: 18.9 GB/s x 75% = 14.1 GB/s)
- Random Read: 4,024K IOPS (100% of raw performance)
- Random Write: 401K IOPS. A performance test of 4 NVMe drives under simultaneous load with a 50/50 read/write pattern demonstrated 1,606K IOPS (Read: 803K IOPS, Write: 803K IOPS). With a RAID 5 write penalty factor of 4, the expected performance for this RAID 5 array is calculated as 1,606 / 4 = 401K IOPS.
Test Results:
RAID6
Using eight KIOXIA NVMe SSDs, we created one RAID 6 array (media6).
Baseline performance calculation for the media6 RAID array:
- Sequential Read: 52.7 GB/s (100% of raw performance)
- Sequential Write: 30.4 GB/s (75% of raw performance: 40.5 GB/s x 75% = 30.4 GB/s)
- Random Write: 430K IOPS. A performance test of 8 NVMe drives under simultaneous load with a 50/50 read/write pattern demonstrated 2,584K IOPS (Read: 1,290K IOPS, Write: 1,294K IOPS). With a RAID 6 write penalty factor of 6, the expected performance for this RAID 6 array is calculated as 2,584 / 6 = 430K IOPS.
Test Results:
Conclusion
The comprehensive performance analysis demonstrates that the combination of xiRAID's RAID engine and HighPoint's Rocket 1628A NVMe Switch Adapter delivers exceptional storage performance in a professional workstation environment. With sequential read efficiencies reaching 98-100% of raw backend performance and write operations achieving up to 95% efficiency, the solution proves highly capable of handling demanding workloads across diverse industries.
The RAID6 configuration achieved near-optimal sequential performance, reinforcing the solution's suitability for bandwidth-heavy applications. This configuration also demonstrated robust handling of write-intensive tasks, further expanding its applicability to professional media workflows. Similarly, the RAID5 configurations provided strong random and sequential performance, demonstrating the system's ability to support high-throughput operations.
The absence of bottlenecks in the HighPoint PCIe switch operation, coupled with consistent performance across both Micron and KIOXIA drive configurations, validates the robustness and scalability of this architecture. These results highlight that cost-effective NVMe-based systems can deliver enterprise-grade performance without requiring specialized infrastructure or significant resource overhead. This solution provides a practical and efficient approach to meeting the growing storage demands of professional workflows, offering an exceptional price-to-performance ratio for organizations in data-intensive markets.