HPE Alletra 9000 – Primera Evolved

I’m excited to announce that the evolution of the HPE Primera (which was the evolution of 3PAR) is now available.

It’s called the HPE Alletra 9000 and is the mission-critical Tier-0 complement to the Tier-1 Alletra 6000 (which in turn is the evolution of Nimble).

It retains the rich feature set of Primera and the 100% uptime guarantee. The main enhancements over Primera are the increased speed and the fact that all of that performance is possible in just a 4U configuration, making it by far the most performance-dense full-featured Tier-0 system in the world. It is managed via the HPE Data Services Cloud Console.

A welcome enhancement (that is also coming to Primera) is Active Peer Persistence, which allows a LUN to be simultaneously read from and written to at both sites of a synchronously replicating pair. This means each site can do local writes to a sync-replicated LUN without its hosts needing to cross the network to the other site.
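
To make this more concrete, here is a minimal Python sketch of the idea only (not HPE's implementation, and all class and method names are illustrative assumptions): two arrays present the same synchronously replicated LUN, a write arriving at either site is committed locally and mirrored to the peer before it is acknowledged, and reads are served from the local copy.

```python
class ArraySite:
    """Toy model of one array in a two-site synchronous replication pair."""

    def __init__(self, name: str):
        self.name = name
        self.lun: dict[int, bytes] = {}       # block address -> data
        self.peer: "ArraySite | None" = None

    def connect_peer(self, peer: "ArraySite") -> None:
        self.peer = peer
        peer.peer = self

    def write(self, block: int, data: bytes) -> str:
        """A host write lands at its local site; no detour to the remote site."""
        self.lun[block] = data                # commit locally
        if self.peer is not None:
            self.peer.lun[block] = data       # synchronous mirror to the peer
        return f"ack from {self.name}"        # acknowledge only once both copies match

    def read(self, block: int) -> bytes:
        """A host read is served from the local copy."""
        return self.lun[block]


site_a, site_b = ArraySite("Site A"), ArraySite("Site B")
site_a.connect_peer(site_b)

site_a.write(0, b"written at A")        # local write at Site A
site_b.write(1, b"written at B")        # local write at Site B, same LUN
print(site_a.read(1), site_b.read(0))   # both sites see both writes
```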

Optimized Architecture

The Alletra 9000 builds on the Primera architecture. This means there are multiple parallelized ASICs per controller helping out the CPUs with various aspects of I/O handling.

The main difference is how the internal PCI architecture is laid out and how PCI switches are used. In addition, all media now uses the NVMe protocol.

These optimizations have enabled a sizable performance increase in real-world workloads.

Predictable & Consistent Experience for Mission Critical Workloads

The Alletra 9000 is a high end, Tier-0 array. This means not only that it needs to be fast, but also that the performance needs to be predictable and consistent, even when faced with unpredictable, conflicting workloads.

It’s not just about high IOPS and throughput in a single workload – latency is a key factor, especially with multiple mission-critical applications that simultaneously demand consistently good results.

The vast majority of I/O completes well within 250 microseconds of latency, for instance.

Intelligence is the other important factor. Can the array automatically determine what workloads to auto-prioritize? Can it automatically deal with many conflicting workloads, each having wildly different I/O characteristics?

You see, it’s easy to make a system that will work well at a fixed block size and simple operations like you’d see in a typical performance benchmark.

It’s a different matter altogether to make a system that can host hundreds of conflicting applications, each with extremely different I/O characteristics.

Performance Improvements – Most Performance-Dense Tier-0 System

Just like in the Alletra 6000 blog post, I will use the SAP HANA certification numbers to show real-world differences between arrays.

I like it since all major vendors participate and it’s not easy to “game” the benchmark. It’s also easy to see who’s faster: the more HANA nodes, the faster the array. No degree needed to interpret the results 🙂

Before I show the results, some notable things to be aware of:

  • All this performance is achieved within just 4U of space, which means rack space savings for customers
  • The Alletra 9000 is a Tier-0 system for mission-critical workloads and offers a 100% uptime guarantee (most other vendors have no such SLA)
  • The vast majority of I/O happens within 250 microseconds
  • Alletra 9000 has sophisticated features meant for mission-critical workloads – things like Active Peer Persistence, complex replication topologies, Port Persistence and more
  • Alletra 9000 is assisted by both InfoSight and the Data Services Cloud Console, making it both easy to consume and effective at lowering customer risk.

Anyway, on to the numbers! All current as of May 18th, 2022 (updated a year after publication to reflect an increase from 96 nodes to 120 on the HPE A9080 after some code enhancements, and to refresh the other vendors’ numbers as well).

Some of you may think I’m cherry-picking for marketing purposes (plus the systems compared aren’t all in the same Tier-0 class), so allow me to explain.

The point I’m trying to make is the sheer amount of performance doable in just 4U. With drive densities being what they are these days, it’s pretty easy to configure enough capacity for most customers in just the base 4U system, making it a very attractive option for saving on rack space.

The physically smallest possible HDS 5500 shown for comparison would need 18U to achieve 74 nodes, so the Alletra 9000 can do roughly 60% more performance in 4.5 times less rack space. One could fit an Alletra 9000, 2 switches and 12 servers in exactly the same amount of rack space 🙂

The HDS can go faster with a lot more drives and controllers: 222 HANA nodes are possible, but you’d need over a full rack of gear to realize that speed. An Alletra 6000 4-way cluster – in just 16U – would do 216 nodes, but arguably it’s not the same class of system when it comes to replication options.

A PowerMax 8000 2-Brick (4 controllers) needs 22U and only does 54 nodes. A 3-brick system (6 controllers) can do 80 nodes and takes almost a whole rack (32U). So even with more controllers, a PowerMax needs 8x more rack space to provide less performance than an Alletra 9000! A maxed-out PowerMax 8000 can do 210 nodes but needs two full racks and massive power consumption – to achieve about 2x the speed of a 4U Alletra 9000… 🙂 (EMC HANA numbers in more detail here, page 19, and PowerMax hardware details here – page 49). So much for all the hero numbers we’ve seen for that system.

Something like an IBM DS8950 is also not 4U, but rather much bigger physically, and it can only do half the performance. An IBM 9500 can scale out to 4 I/O groups (each I/O group is a pair of controllers), and configured that way it could do 164 nodes… but a single 9500 appliance won’t do more than 41.

The same goes for NetApp: you’d need more than two A900s to hit the same node count as the Alletra 9000, which means the A900 has much lower performance density than an Alletra 9000 (plus the A900 is not truly active-active, in the sense that a single volume can only be served by a single controller, whereas in the Alletra 9000 all 4 controllers serve all volumes simultaneously).
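
To put the density argument in plain numbers, here is a small Python snippet that turns the node counts and rack-unit figures quoted above into certified HANA nodes per rack unit (the figures are the ones cited in this post; check the linked results pages for current numbers).

```python
# Performance density from the figures quoted above: certified HANA nodes per rack unit.
systems = {
    "HPE Alletra 9000 (4U)":          (120, 4),
    "HDS 5500 (18U)":                 (74, 18),
    "PowerMax 8000 3-brick (32U)":    (80, 32),
}

for name, (hana_nodes, rack_units) in systems.items():
    print(f"{name}: {hana_nodes / rack_units:.1f} nodes per U")

# HPE Alletra 9000 (4U): 30.0 nodes per U
# HDS 5500 (18U): 4.1 nodes per U
# PowerMax 8000 3-brick (32U): 2.5 nodes per U
```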

As you can see, the Alletra 9000 performance density is really something quite special, especially given the high-end class of the system.

Links to the results: Alletra 9000, Alletra 6000, HDS 5000, IBM DS8000, IBM 9200, Pure, PowerMax, NetApp.

Summary

The Alletra 9000 is the perfect complement to the Alletra 6000, and is designed to tackle complex mission-critical workloads with ease while providing the highest performance density of any Tier-0 platform.

It’s differentiated from the Alletra 6000 by its 100% uptime guarantee, higher performance, and advanced replication features useful for mission-critical workloads, such as Active Peer Persistence.

Just like with the Alletra 6000, it’s not about the box, it’s about the whole customer experience. With the help of InfoSight and Data Services Cloud Console, it is able to take ease of consumption, data services, automation and orchestration to the next level.

D

4 Replies to “HPE Alletra 9000 – Primera Evolved”

  1. Why is there no comparison to the Huawei Dorado V6 on HANA nodes? It was released 2 years ago, and on the SAP page it looks like the 6000, 8000 and 18000 offer equal or way more nodes than the Alletra.

    1. The purpose is to compare to the more common (and higher end) systems. Also, to show how many rack units are needed to accomplish the result.

      The Alletra 9080 does more SAP HANA nodes in 4U than any other array in the world.

      Are there systems faster in absolute terms? Sure, as mentioned in the article, two full racks’ worth of gear can result in something faster than a 4U Alletra 9080 🙂

  2. It’s convenient that you ignored the competitive models which are more in line with the low performance of an Alletra?
    In a per 2 RU comparison…

    A Dell PowerStore 7000 can support 14 HANA nodes per 2RU
    The PS 7000 (based on the required minimum H/W) can deliver only 180K IOPS per RU (Hero#s)

    A HDS VSP 790 can support 34 HANA Nodes per 2RU
    HDS VSP 790 delivers 3.44M IOPS per RU

    HPE Primera A650 supports 30 HANA Nodes per 2RU
    HPE Primera A650 supports 375K IOPS per RU

    A Pure /X70 supports 25 HANA nodes per 2RU

    1. What IOPS are you talking about? The entire point of this analysis is to focus on certified HANA node count performance, not marketing numbers.
