AHEAD Approved: An Introduction to Dell EMC Isilon F800

At AHEAD, we like to think we keep on top of the industry trends shaping the evolution of data center technology. Two of the most evident trends affecting data center design and infrastructure in recent years are the focus on “flash” and “all-flash” systems, and the push to develop and deploy “hyperconverged” technologies for many use cases, including general virtualization, software-defined storage, and data protection. We see many instances where both trends come together in scale-out virtualization platforms built on “standardized” commodity servers with flash capacity and the most critical component: smart software.

The use of commodity servers (Dell PowerEdge, Cisco UCS, HPE ProLiant, Supermicro, etc.) of a standardized or particular rack unit height as the foundation for software-defined systems, analytics platforms, and NoSQL databases is grounded in making the hardware as cheap as possible at scale. The specialized software is responsible for assembling all that hardware into something logical, protecting the data, and ultimately servicing end-user or application IO. Data protection is typically N-way mirroring, because anything more elegant (erasure coding, parity schemes, etc.) is difficult to achieve without trade-offs across more loosely coupled nodes.
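
To make that trade-off concrete, here is a minimal Python sketch, using generic illustrative numbers rather than any vendor's actual scheme, comparing the usable fraction of raw capacity under N-way mirroring versus a data-plus-parity erasure code:

    # Illustrative usable-capacity comparison: N-way mirroring vs. an
    # N+M erasure/parity scheme. Numbers are generic, not vendor specs.

    def mirror_efficiency(copies: int) -> float:
        # With N-way mirroring, only 1 of every N copies is usable data.
        return 1.0 / copies

    def erasure_efficiency(data_units: int, parity_units: int) -> float:
        # With a data+parity stripe (e.g., Reed-Solomon), usable capacity
        # is the data portion of each stripe.
        return data_units / (data_units + parity_units)

    print(f"3-way mirror:      {mirror_efficiency(3):.0%} usable")       # 33%
    print(f"16+2 erasure code: {erasure_efficiency(16, 2):.0%} usable")  # 89%

The gap is why mirroring is the common answer for loosely coupled commodity nodes even though it burns two thirds of the raw capacity.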

Commodity servers aren’t the most efficient way to push the storage industry toward a petabyte of effective capacity per rack unit. However, custom hardware designs leveraging commodity components are now emerging and taking storage performance and density to the next level. Case in point: Dell EMC’s all-flash “Nitro” Isilon system, officially known as the F800.

Isilon has been in the Dell EMC portfolio since EMC acquired the technology for more than $2B in 2010. Over the last seven years, the acquisition has paid off in spades: Isilon is a massive workhorse that fits many client use cases and is one of Dell EMC’s most widely deployed storage platforms. In that time, Dell EMC has made major software updates to the Isilon OneFS platform, ridden the Intel processing curve, and made moderate use of flash to accelerate system performance for caching and metadata, but in general has not brought significant changes to the hardware architecture.

To that point, the Isilon X410 was arguably the capacity-vs-performance sweet spot in the previous generation of Isilon hardware. Its 4U chassis housed both compute and 36 drive slots. Looking at raw capacity density, an example configuration using 4TB hard disk drives, with a couple of drive slots held back for metadata-accelerating flash drives, results in 136 terabytes. On the compute side, this configuration puts two 8-core CPUs in 4U. Network connectivity can be dual 10GbE or 40GbE, and the interconnect for this and prior generations was InfiniBand.
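
As a quick back-of-the-envelope check of that density figure (assuming “a couple” of reserved slots means two):

    # X410 example config: 36 drive slots, 2 held back for metadata SSDs
    # (our assumption), 4TB HDDs, all in a 4U chassis.
    slots, reserved, drive_tb, rack_units = 36, 2, 4, 4
    raw_tb = (slots - reserved) * drive_tb   # 34 drives x 4TB
    print(raw_tb)                            # 136 (TB raw in 4U)
    print(raw_tb / rack_units)               # 34.0 (TB per rack unit)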

What’s new?

With the launch of Isilon Nitro (aka the Isilon F800), Dell EMC is introducing new ideas and a change in approach. The biggest (and most obvious) change is that instead of individual 4U nodes, each with its own disk in its own enclosure, the new 4U F800 chassis design holds up to 4 nodes in 4U and is, overall, much more customized and dense:

  • The most notable design element is the use of a midplane between compute nodes and storage media, a useful abstraction that can facilitate future compute upgrades without also replacing storage media. It also provides flexibility as media connectivity evolves, particularly with flash. This is the design decision that lets Dell EMC fit 4 nodes with disk in a single 4U chassis.
  • Building on the above, data center density, space, and power are dramatically improved: 4 all-flash nodes in 4U versus 4 nodes in 16U with 144 spinning disks.
  • The media is now held in deep, narrow “sleds” that hold 3 flash drives each. There are 20 sleds per chassis which, when using 15.4TB solid state drives, yields 924TB of raw capacity (the quick math appears after this list).
  • There are up to 4 compute nodes per chassis, each with an Intel Xeon E5-2697A v4 16-core processor and 256GB of ECC memory.
  • The node interconnect is now 40Gb Ethernet, though InfiniBand remains available for nodes being incorporated into prior-generation clusters.
  • Client connectivity is 2 x 10Gb or 40Gb Ethernet per node.
  • The system is now optimized for drives formatted with 4KB sectors (4Kn).
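
For the 924TB figure above, the arithmetic is simple; here it is as a few lines of Python using the marketed 15.4TB drive size:

    # F800 chassis: 20 drive sleds x 3 SSDs each, 15.4TB per SSD, in 4U.
    sleds, drives_per_sled, drive_tb, rack_units = 20, 3, 15.4, 4
    raw_tb = sleds * drives_per_sled * drive_tb
    print(raw_tb)                # 924.0 (TB raw per chassis)
    print(raw_tb / rack_units)   # 231.0 (TB per rack unit)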

Compared to what you get in a 4U hybrid disk/flash X410, the F800 has:

  • Roughly 1.7X the number of drives, and all SSDs at that
  • 4X the number of CPU cores (64 cores per 4U versus 16)
  • 4X the number of network paths into the system for clients
  • Up to more than 6X the raw storage capacity
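
Those multipliers fall straight out of the per-4U figures quoted in this article; a short sketch makes the comparison explicit:

    # Per-4U comparison using the figures quoted in this article.
    x410 = {"drives": 36, "cores": 2 * 8,  "client_ports": 2,     "raw_tb": 136}
    f800 = {"drives": 60, "cores": 4 * 16, "client_ports": 4 * 2, "raw_tb": 924}

    for metric in x410:
        print(f"{metric}: {f800[metric] / x410[metric]:.1f}x")
    # drives: 1.7x, cores: 4.0x, client_ports: 4.0x, raw_tb: 6.8x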

Changes in the OneFS 8.1 code release are largely targeted at support and optimization for the new hardware rather than new features, so there’s not much to touch on here. The basic architecture of OneFS remains as well, with its core tenets intact: configurable data protection at the file/directory level using Reed-Solomon forward error correction codes, the same cluster interconnect approach (now over 40Gb Ethernet instead of InfiniBand), and so on.

From a performance perspective, our initial test configuration utilized 4 physical Linux hosts, each with a 40Gb Ethernet connection over a dedicated switch to our single-chassis, 4-node F800 system in the AHEAD Lab. Bandwidth is the most typical performance expectation for scale-out NAS in our customer base, so this is where we spent all of our initial testing time.

Using IOzone over NFSv3, we were able to exceed 10 GiB/second of aggregate read bandwidth at an IO size of 256KB without a lot of tweaking. The system works and performs as advertised, right out of the box.
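
For readers who want to reproduce a run along these lines, the sketch below shows one plausible IOzone invocation in its distributed throughput mode, plus the conversion from IOzone’s reported KB/s. The client list file name, thread count, and file sizes here are illustrative assumptions, not our exact test parameters:

    import subprocess

    # One plausible IOzone invocation for a distributed NFS bandwidth test.
    # "clients.txt" (hypothetical name) lists one line per client host:
    #   hostname  /mnt/isilon/testdir  /usr/bin/iozone
    cmd = [
        "iozone",
        "-+m", "clients.txt",  # cluster mode: spread threads across clients
        "-t", "16",            # total worker threads (illustrative count)
        "-r", "256k",          # 256KB record (IO) size, as in our testing
        "-s", "8g",            # file size per thread (illustrative)
        "-i", "0",             # test 0 = write; required to lay down files
        "-i", "1",             # test 1 = read
        "-c", "-e",            # include close() and flush in the timings
    ]
    subprocess.run(cmd, check=True)

    # IOzone reports aggregate throughput in KB/s, where KB = 1024 bytes,
    # so a reported read figure converts to GiB/s like this:
    reported_kb_per_sec = 11_000_000.0    # example value, not a lab result
    print(reported_kb_per_sec / 1024**2)  # ~10.5 GiB/s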

So what about latency?

We have additional latency-specific testing in the works, and we expect latency to improve with all-flash media on the new hardware. However, as with all distributed systems, our general expectation is that latency will not be as low as on a node-locked/non-distributed volume architecture like NetApp’s ONTAP, though additional testing will confirm this hypothesis.

Based on our testing of pre-GA F800 hardware in the AHEAD Lab, the F800 represents a notable increase in performance and capacity per rack unit while retaining the node-based simplicity that Isilon has always brought to the table. The data center implications of the condensed chassis design and all-flash media immediately bring the Nitro line into play for existing Dell EMC Isilon customers, as well as for any clients looking at scale-out NAS/file/Hadoop/grid storage computing scenarios. We will be replacing our pre-GA Isilon Nitro box with a new F800 in our lab shortly.

Do you have a workload that you would like to try out on the F800? Let us know and we will work with you to put the AHEAD Lab at your disposal and get things started.



Author: Scott Reder
Scott's 25-year career includes technology leadership roles in Fortune 100 companies. Scott served as a Technology Director for Sears and as a Storage Architect for Allstate. In addition, he served as a consultant at major technology companies, including EMC and NCR/Teradata, and at two California-based start-ups: Marantie (acquired by EMC) and DATAllegro (acquired by Microsoft).
