Create a FreeNAS SAN Solution

In today's world, data is important, and we have amassed a lot of it.  We need that data to be highly available, highly reliable, and at our fingertips.  In the world of information technology (IT), this translates to every data center needing some form of centralized storage.  There are plenty of enterprise-class solutions to choose from, but most bring real complexity and can be price prohibitive.  We needed a storage solution for developing and maintaining storage area network (SAN) experience, and our lab needed a centralized storage device.

Enter FreeNAS.

http://www.freenas.org/

FreeNAS, developed by iXsystems, is the community side of their commercial product TrueNAS.  It is a FreeBSD- and ZFS-based storage operating system with enterprise-class features and capabilities.

Implementation

There are many ways to design and build a FreeNAS-based storage solution.  FreeNAS' flexibility is one of its strengths, but as with any IT system, a poor design will almost certainly yield poor results.  Building a storage system for low latency and high IO performance has a very different set of requirements from building one for high-throughput media storage.  Our design follows FreeNAS and general storage best practices as closely as a lab data center's budget allows.

Controller Physical Platform

Our physical platform for running FreeNAS is a 2U SuperMicro 826 chassis with a SuperMicro X8DTN+ motherboard, dual Xeon L5520 2.27GHz processors, and 72GB of RAM.  The chassis is rack mountable, has rails available, and is solidly built.  It has slots for twelve large form factor (LFF) drives, and drive adapters allow the use of small form factor (SFF) drives when needed.  It also has dual hot-swappable power supplies and replaceable fan units.

The FreeNAS forums recommend higher clock speeds over core/socket count, because Samba is effectively single-threaded per client connection, so per-client SMB performance tracks clock speed.  In our lab's case, the low power consumption of the Xeon L5520 (60 watts) was an acceptable trade-off.  With FreeNAS/FreeBSD/ZFS, RAM (used as the ARC, or adaptive replacement cache) is the system's primary cache, and system RAM should be maximized before adding any L2ARC (secondary cache on SSD).  FreeNAS has many build guides and recommendations for system builders that address these design aspects.
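As a rough sanity check (a sketch, not FreeNAS-specific tuning advice), the current ARC size and hit counters can be read from the FreeBSD sysctl tree in the FreeNAS shell:

    # Total physical memory and current ARC size, both in bytes
    sysctl hw.physmem
    sysctl kstat.zfs.misc.arcstats.size

    # ARC effectiveness: compare cumulative hits against misses
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses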

One of the other key aspects of a good system is some form of baseboard management controller (BMC) or intelligent platform management interface (IPMI).  On SuperMicro boards, as with other enterprise-class products, the IPMI allows remote power control and console access.
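For example, with a tool like ipmitool on any management host, the BMC can be queried and controlled over the network; the address and credentials below are placeholders:

    # Check and control chassis power through the BMC (placeholder IP/credentials)
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power status
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power cycle

    # Serial-over-LAN gives console access even when the OS is unreachable
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate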

Our system uses two SAS HBAs for storage connectivity.  An IBM M1015 provides two internal SAS 8087 ports, and an HP 221 provides two external SAS 8088 ports.
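Once the HBAs are cabled (and, per the usual FreeNAS recommendation, flashed to IT mode), FreeBSD's camcontrol is a quick way to confirm that both controllers and every attached disk are visible to the operating system; a minimal check, assuming the LSI mps(4) driver is what attaches them:

    # List every CAM bus and attached device; each SAS disk should appear as a daN device
    camcontrol devlist -v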

The controller is also configured with a QLogic QLA-series HBA providing dual 4Gb fibre channel ports, plus two gigabit network ports for iSCSI, allowing the system to connect redundantly and simultaneously to both FC and iSCSI storage fabrics.

Disk Expansion Enclosures

We are currently using three different disk enclosures, which gives us a range of equipment from which to derive performance-testing data.

The SuperMicro 826 chassis can also serve as a standalone drive enclosure by installing a power control board in place of a motherboard.  The SuperMicro part (CSE-PTJBOD-CB1) takes over the power and fan control functions of a typical motherboard.  We connected this enclosure to the IBM M1015 internal HBA using an internal 8087-to-external 8088 adapter.

HP D2700

The HP D2700 is a 6G SAS 8088 connected enclosure that supports 25 2.5″ dual port SFF drives.  It’s currently populated in our system with 12 300GB SAS drives. https://www.hpe.com/us/en/product-catalog/options/pip.hpe-d2700-disk-enclosure.3954790.html

HP MSA 70

The HP MSA70 is a 3G SAS 8088 connected enclosure that supports 25 2.5″ single port SFF drives.  It’s currently populated in our system with 24 146GB SAS drives. https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c00864149

Both the MSA70 and the D2700 are connected to an HP 221 dual 8088 external port SAS HBA.  https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03231137

Storage Pool Architecture

ZFS arranges storage into ‘pools’ in much the same way traditional RAID creates arrays.  Pools are built from vDEVs (virtual devices), which share the IO load requested of the pool, and each vDEV is in turn composed of drives arranged in the desired redundancy configuration, such as mirrors, Z1, Z2, or Z3.  There’s a great presentation created by one of the FreeNAS forum users: https://drive.google.com/file/d/0BzHapVfrocfwblFvMVdvQ2ZqTGM/view

Pools used for high-IO scenarios, such as our VMware datastores, are arranged into mirrored pairs, similar to RAID 10.  This gives a higher number of vDEVs to absorb the IO requested of the pool, increasing its performance at the cost of available capacity.
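As a minimal sketch of that layout (FreeNAS normally builds pools through its GUI, and the pool and device names below are placeholders), a striped-mirror pool looks like this at the ZFS command line:

    # Create a pool of mirrored pairs (striped mirrors); each "mirror" clause is one vDEV
    zpool create tank \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5

    # Adding another mirrored pair later grows the pool and adds a vDEV to share the IO load
    zpool add tank mirror da6 da7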

Our SuperMicro 826 enclosure is loaded with twelve 2TB 7200RPM drives, configured into one pool with six mirrored pairs (vDEVs).  The command “zpool list -v Tier2-Group2” shows the configuration of the pool named ‘Tier2-Group2’.  The pool also has two striped SSDs for cache, a very common technique used by many storage vendors to make high-demand data more rapidly available.
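To illustrate, the commands below show the vDEV layout of that pool and the general way striped cache (L2ARC) devices are attached; the SSD device names are placeholders:

    # Show capacity and layout per vDEV, then full health/status detail
    zpool list -v Tier2-Group2
    zpool status Tier2-Group2

    # Cache (L2ARC) devices are added to an existing pool like this; ada0/ada1 are placeholders
    zpool add Tier2-Group2 cache ada0 ada1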

The HP enclosures are arranged similarly, with all of the loaded SAS disks configured in mirrored pairs for high IO throughput.  Adding cache devices substantially improves their performance on frequently read data as well.

Storage Connectivity

Our SAN’s connectivity is a combination of 4Gb fibre channel (FC) fabrics and iSCSI.  We’re using a Brocade 200E for dual FC fabrics, allowing MPIO of up to 800MB/s of aggregate throughput.  iSCSI connectivity is used primarily to offload backup traffic.
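On the VMware side, multipathing across both fabrics can be verified, and set to round robin, per device with esxcli; this is only a sketch, and the naa device identifier is a placeholder:

    # List claimed devices and their current path selection policy
    esxcli storage nmp device list

    # Use round robin across both fabric paths for a given LUN (placeholder device ID)
    esxcli storage nmp device set --device naa.0123456789abcdef --psp VMW_PSP_RR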

The Brocade 200E is a dated fibre channel switch compared with current models reaching 128GFC.  For our data center, its performance and capability are entirely adequate, and it lets us establish solid zoning practices.
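A minimal single-initiator zone on Brocade Fabric OS looks roughly like the following; the alias names and WWPNs are placeholders for illustration:

    # Alias the host initiator port and the FreeNAS target port, then zone them together
    alicreate "esx01_hba0", "10:00:00:00:c9:aa:bb:cc"
    alicreate "freenas_fc0", "21:00:00:e0:8b:11:22:33"
    zonecreate "z_esx01_freenas", "esx01_hba0; freenas_fc0"

    # Add the zone to a configuration and activate it on the fabric
    cfgcreate "fabric_a", "z_esx01_freenas"
    cfgenable "fabric_a"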

Solution Summary

As a whole, this storage solution has outperformed any other NAS-based product or in-house solution we have used to date.  Assessed holistically with the VMware I/O Analyzer, the SAN supports 8K/16K workloads of roughly 32,000 to 34,000 IOPS, which is entirely adequate for a data center with modest requirements.
