Personal Computing

Network Attached Storage

The Moocher has operated his home network with host-attached storage for a number of years. Two Drobo storage arrays provide storage for media and serve as a Time Machine spool volume. These machines have served well but are running into age-related limitations, and it is time to consider replacing them in the near future.

IXsystems FreeNAS-mini storage array

Revision History

  1. 12/28/2016: Add references, revise summary of commercial products, revise home-brew discussion to reflect actual costs after actually adding them up.


These references include a selection of NAS products commonly used in home and small office environments, listed in alphabetical order. The last is Brian Moses's blog, in which he writes about building small-scale FreeNAS servers suitable for home use. Brian tends to pick high-end parts; his builds are comparable to the commercial vendors' 6-bay small office systems and are price competitive.

  1.  retrieved 12/28/2016
  2. retrieved 12/28/2016
  4. retrieved 12/28/2016
  6. retrieved 12/28/2016

Direct and Network Attached Storage

Storage comes in two categories: directly attached storage, either internal or external to a host, and network attached storage, sharable by multiple hosts. Drobo comes in both varieties. My current large-scale storage is directly attached. Even with just me, this has its drawbacks.

  • It is not laptop computer friendly. Directly attached storage is tied to a host.
  • Device interfaces become obsolete long before a device is full.
  • Apple’s new computers are I/O interface limited; Apple is reducing the number of ports to reflect common usage.
  • The role of storage is changing. My original Drobo served as Time Machine storage. My second was purchased to move media off the host’s internal drives. Now, I need media storage to be accessible to external media servers (NVIDIA Shield).

What’s out there

The references list a collection of commonly recommended NAS manufacturers plus FreeNAS, a free open source software project sponsored by IXsystems, a provider of business and small enterprise scale storage systems. These systems fall into three groups.

  1. Specialized storage appliances, like Drobo built by EMC alumni, using proprietary hardware and software.
  2. Commercial RAID storage systems built from the Linux kernel, its md multi-disk and RAID add-ins, and the EXT file system (or the BTRFS file system in the case of NetGear).
  3. FreeBSD-based FreeNAS, far and away the favorite for home-brew NAS.

What I quickly found was that Drobo remains about a simple, hassle-free user experience; the mass-market SOHO/business vendors' devices are about features, with no mention of the failure recovery experience (until recently, bad). Meanwhile, FreeNAS is about reliability, availability, and scalability, though some user skill and knowledge is required to configure and maintain a FreeNAS system.

In reviewing the websites, I came to the following conclusions.

  • It is impossible to tell if the SOHO vendors have hardened their systems appropriately for continuous operation. The key improvement is to use ECC memory.
  • It is impossible to tell if they have hardened their systems against power interruption. Are commits of data to disk atomic? Does it all go or none of it? Can the file system be left in an inconsistent state?
  • It is impossible to tell if the commercial systems that are not Drobos or FreeNAS/TrueNAS systems can detect and recover from bit rot in data at rest.

What is Drobo

Drobo is a data storage device using a proprietary file system that allows a volume to span multiple physical devices and to be single or double disk failure resistant depending on the model. Four-disk models offer single disk failure protection; five-disk models offer dual disk failure protection (if all disks are installed).

Drobo is designed to be plug and play with minimal configuration required. Front panel indicators show disk health, indicate disks that have failed, or disks that are full. Drobo allows mixing of disk sizes, automatically configures itself, and automatically detects disk replacement and rebuilds data integrity. Drobo periodically reads each file to confirm no corruption has occurred. When detected, Drobo corrects corrupt blocks.

My second-generation Drobos, designed 10 years ago, predate today's large disks and today's interfaces. They support FireWire 400/800 and USB 2.0; FireWire is obsolete and USB 2.0 is obsolescent. In the 2018-2019 time frame, they will need replacement disks.

Drobo remains the product of choice for applications requiring reliable, no-fuss, no-frills storage. It detects and corrects bit rot, alerts you to failed drives, and failed drive replacement is as easy as replacing a book on a bookshelf.

NetGear, QNAP, SYNOLOGY, et al.

By and large, systems based on Linux and the EXT 2-4 file systems are unable to correct errors in data at rest. EXT 2/3/4 keep data block checksums in the blocks themselves. This allows EXT to detect corruption, but EXT has no way of knowing whether it was the data or the checksum that was corrupted. The Linux RAID layer keeps a separate chunk of ECC data that allows recovery of the block. Depending on the RAID level configured, one or two sets of ECC data are kept. The data is spread across the disks in a way that protects it from a single disk failure.
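The difference between detecting and correcting can be made concrete. The sketch below is a toy model of single-parity RAID, not the md driver itself: XOR parity kept apart from the data lets any one lost block be rebuilt from the survivors, which a per-block checksum alone cannot do.

```python
# Toy single-parity RAID: XOR the data blocks together; any one lost
# block can be rebuilt by XOR-ing the surviving blocks with the parity.
# Block contents are illustrative.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Three data "disks" plus one parity "disk"
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = b"\x00" * 4
for block in data:
    parity = xor_blocks(parity, block)

# Disk 1 fails; rebuild its block from the survivors plus parity
rebuilt = parity
for i, block in enumerate(data):
    if i != 1:
        rebuilt = xor_blocks(rebuilt, block)

assert rebuilt == data[1]  # the lost block is recovered exactly
```

Dual-parity schemes (RAID 6, RAIDZ2) extend this idea with a second, independently computed parity chunk so that two lost blocks can be recovered.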

NetGear has taken point on Linux with BTRFS. BTRFS is a ZFS-like file system designed for large-scale, multi-volume, fault-tolerant file systems. It is just now becoming mature enough to use as a root file system. NetGear is the first to commit its entire storage line to BTRFS. QNAP and SYNOLOGY are in transition from EXT4 to BTRFS, but it is impossible to tell from website marketing materials which models are BTRFS-based.


IXsystems develops TrueOS, FreeNAS, and OpenZFS for use in its own small to mid-sized enterprise storage products. It serves the single-chassis to several-rack market with high-quality, professionally designed systems built to customer requirements. IXsystems makes both TrueOS and FreeNAS available for hobby and small business use under the BSD license. FreeNAS is composed of a FreeBSD kernel with the OpenZFS file system.

FreeNAS is designed to run from a thumb drive rather than the storage disks. This protects the OS and system configuration data from disk failures. It is this feature that allows FreeNAS to run through a disk failure and to know how to rebuild the array when one of its member devices fails. To my knowledge, this is unique to FreeNAS. The others may cache the configuration data on the manufacturer's web portal, but it is not mentioned as a selling feature. As I said before, convenient recovery is not a selling feature outside the Drobo and FreeNAS products.


TrueOS, like macOS and iOS, descends directly from the original University of California port of Bell Labs UNIX to the demand-paged virtual memory VAX-11/780 hardware. Through the years, this software has been maintained and enhanced for the robustness needed to support non-stop professional applications. Using FreeBSD offers the following advantages.

  • Very mature
  • Additions are reviewed for security issues
  • Widely used for research filesystem development
  • Provides robust isolated environments called Jails: lightweight virtualization similar to containers, except that each environment has its own OS instance.
  • Provides robust industry standard user credentials management and user authentication (NIS, LDAP, Kerberos)
  • Provides standard access control lists to control and grant resource access to users and processes.
  • One can safely run network services on a FreeNAS server. Many folks run their Plex servers on their FreeNAS host.
  • IXsystems offers several back-up services packaged to run in Jails, including Bacula (a disk backup system) and the CrashPlan client.
  • FreeNAS happily supports Time Machine and keeps its appetite for storage within bounds.


OpenZFS is an open source implementation of Sun Microsystems' Zettabyte File System (ZFS). Sun originally offered the file system as open source; once Sun was acquired by Oracle, future versions became proprietary. Today IXsystems, with the open source community, maintains ZFS as OpenZFS and uses it in its enterprise storage products.

OpenZFS integrates physical device, logical device, files, and data redundancy in a single logical framework that replaces the logical volume manager, multi-device manager, and kernel RAID. This allows OpenZFS to seamlessly and reliably provide the following capabilities to the host OS.

  • Multi-spindle and multi-controller file systems possible
  • Single disk failure protection
  • Two disk failure protection
  • Mix of disk sizes supported with some space loss on large disks
  • Reliable disk replacement and reconfiguration
  • Snapshots and roll back to earlier file system states
  • Recovery of deleted files from an earlier snapshot
  • Expansion while operating by replacing smaller disks with larger
  • Expansion while operating by adding disks
  • UNIX, iSCSI, AFP, CIFS, and NFS interfaces supported.

It takes about 10 years for a file system to become robust. OpenZFS has been in service that long in corporate, university, and hobbyist environments.
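A few of the capacity bullet points above can be illustrated with a simplified model. This is a back-of-envelope sketch, not OpenZFS's actual allocator: a RAIDZ2 vdev gives up two disks' worth of parity, and every member contributes only as much as the smallest disk, which is where the space loss on mixed sizes comes from. All sizes below are illustrative.

```python
# Rough usable-capacity model for a RAIDZ2 vdev: two disks' worth of
# parity, and every member counted at the size of the smallest disk,
# so mixed sizes waste space on the larger drives. Real pools also
# lose a little more to metadata and allocation padding.

def raidz2_usable_tb(disk_sizes_tb):
    if len(disk_sizes_tb) < 4:
        raise ValueError("RAIDZ2 needs at least 4 disks")
    smallest = min(disk_sizes_tb)
    return smallest * (len(disk_sizes_tb) - 2)

print(raidz2_usable_tb([4, 4, 4, 4, 4, 4]))  # six 4 TB disks -> 16
print(raidz2_usable_tb([4, 4, 4, 4, 6, 6]))  # mixed sizes -> still 16
```

The second call shows the mixed-size penalty: the two 6 TB disks add no usable space until the 4 TB members are replaced with larger drives.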

RAID’s Achilles Heel

With many RAID systems, it was the operator's sad experience to discover that a disk had failed. When the failed disk is replaced, the RAID system has to rebuild its contents from the data on the remaining disks. All too often, a second failure would happen during the rebuild, leaving the operator in a bad place. That can happen with any of the systems described here. That's why most home-brew builders configure RAIDZ2 for 2-disk failure protection.

Home-brew or Buy?

Can you make one yourself? Yes. Are the savings worth a day or two of time? That depends, but it appears that you can save about $300 on a home-brew array, and you'll know it can be repaired down the road.

Commercial arrays must be returned for depot repair, which can take 30 days with ground shipment and standard service, or be replaced outright with unknown prospects for data recovery. Not good when the array is your backup device.

The Off the Shelf Option

IXsystems makes a FreeBSD distribution called FreeNAS that is available for unrestricted hobby and commercial use. IXsystems provides hardware selection guidance and a disk compatibility checker to allow those so inclined to home brew a small scale storage array.

They also make two small storage appliances for the home and small office market, the FreeNAS Mini (4 bays, $1000) and the FreeNAS Mini XL (8 bays, $1400). These are top-shelf systems with robust power supplies, motherboards, and ECC memory.

Home Brewing

Small NAS systems like this are easily assembled and commissioned in a day. Once built, they should be run in for a day or two to sort out infant mortality in the disks. This needn't be anything fancy: rsync the data over several times to force writing and reading. If a controller was DOA on the motherboard, this will find it. Run the new and old systems in parallel for a month if you can.
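A minimal stand-in for the rsync loop, sketched in Python with placeholder paths and small file sizes: write pseudo-random files to the new array, read them back, and compare checksums. A real burn-in would use much larger files and run for hours.

```python
# Minimal write/read-verify burn-in pass: write pseudo-random files,
# read them back, and compare SHA-256 digests. The target directory,
# file count, and sizes are placeholders for a real mount point.
import hashlib
import os

def burn_in_pass(target_dir, n_files=4, size_mb=1):
    os.makedirs(target_dir, exist_ok=True)
    digests = {}
    for i in range(n_files):
        path = os.path.join(target_dir, f"burnin_{i}.bin")
        data = os.urandom(size_mb * 1024 * 1024)
        with open(path, "wb") as f:
            f.write(data)
        digests[path] = hashlib.sha256(data).hexdigest()
    # Read everything back and verify the checksums match
    for path, expected in digests.items():
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            raise IOError(f"verification failed on {path}")
    return len(digests)

print(burn_in_pass("/tmp/nas_burnin"), "files written and verified")
```

Repeating the pass a few times exercises the controllers and drives the same way repeated rsync runs would.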

The hard part is in planning the build, selecting a system board, and selecting memory compatible with it. Use the motherboard maker's memory selection tool and buy what it recommends.

The second tricky part is to ensure the case has enough disk bays. Plan these out observing the recommendations below.

The third bit is to ensure that everything fits if using a compact case. Mounting holes and cable access can vary from one case to another, so watch YouTube builds with your chosen case to identify potential issues before ordering.

How Many Disks?

Conservative builds plan for 2-disk failure protection using RAIDZ2. OpenZFS RAIDZ2 makes best use of 4, 6, or 10 equally sized disks. Cases typically have 4, 6, or 8 drive slots. When selecting a case, ensure that it will hold the number of drives you plan to use. Two 2.5-inch bays are a bonus if you are planning to add read and write cache SSDs down the road. Most small systems have low traffic rates and will not benefit significantly from the added cost. Adding an SSD write cache reduces write vulnerability to power interruptions, but so does a UPS.

Most commercial systems have hot spare disks installed. In my experience, live disks last a median of about 5 years. With 4 in the box, 2 will die in year 5. Luckily, these failures have been far enough apart to allow replacement of the first to complete before the second took the plunge. If you have a hot spare in the box, there is a significant probability that it will be your second casualty. I've always been able to mail order a replacement and install it in my Drobos without the array rebuild being interrupted by a second failure.

Component Selection


The trick is to be very careful about selection of motherboard and processor. Most boards are made for gamers. As such, they aim for high-performance rendering rather than continuous operation, reliability, availability, and scalability. Disk interfaces are limited and ECC memory is not supported. Gamer boards are designed to allow hot rodders to overclock processors and memory, features that won't be used in a reliability-focused system.

Server boards are identified by looking for these things: no overclocking bits, no fancy audio, multiple disk interfaces, ECC memory support, and an intelligent platform management interface (IPMI) for bare-metal management over IP in the startup firmware.

Generally, SuperMicro specializes in motherboards for file and database servers and HPC modeling and simulation applications. The other mother board makers cater mostly to hot rod builders. Of those, ASROCK has two boards based on Intel Avoton parts that are useful. These boards may be late in product life as they’ve been on the market for 3 years.

High-end Intel Atom embedded system processors and low-end Xeon server parts make a nice FreeNAS system. If you plan to encrypt the volume, a newer part with AES hardware support is helpful. ECC memory, preferably unbuffered, is strongly recommended. Unbuffered ECC memory costs about 10-20% more than non-ECC memory, making it affordable. Registered memory sells in low volumes, making it costly.

Revisiting the pricing using the Brian Moses article as a guide, it is possible to pick the core system for a cost on the order of $700. Disks are extra and Brian included 7 at $140 each in his total.
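The arithmetic behind that estimate, using only the figures from the text:

```python
# Build cost using the figures above: a ~$700 core system plus the
# seven $140 disks Brian included in his total.
core = 700
disks = 7 * 140
print(f"core ${core}, disks ${disks}, total ${core + disks}")
# core $700, disks $980, total $1680
```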

Sticky Patches

First, several builders reported receiving the ASROCK boards with ill-behaved Marvell disk controllers. The sellers and ASROCK stood behind these parts, but know your RMA policy.

Second, Brian ran into power supply fit issues by choosing his case and power supply independently. The case manufacturer allowed room for 1U power supplies, but the mount points did not align with the supply Brian bought, so he made up an internal shelf to properly support it. One way to avoid this is to buy the case and power supply from a common provider. Fractal Design (Node 304) and SilverStone (DS380) supply cases and power supplies suitable for NAS builds.

The third issue, not mentioned by Brian, is the video on the ASROCK C2750D4I: it's VGA in an HDMI world. Brian used the IPMI interface for his start-up. If you lack a VGA display, you'll need a way to discover the host's IPMI interface IP address without one.

If you are planning to use the host to run an application needing a user interface in a Jail, adding a low-end NVIDIA card would be a good idea. NVIDIA provides FreeBSD drivers for its current products.



By davehamby

A modern Merlin, hell bent for glory, he shot the works and nothing worked.