Like most people, I learn a lot more by doing things wrong before doing them right. Maybe I can save someone some of my learning pain, I mean curve!

Friday, June 24, 2011

Change is Certain

Sneak Peek:  Ubuntu 10.04 LTS Server with native zfs, here I come!

Since my last post about 8 months ago, I have been living with my FreeBSD 8.2-RELEASE/STABLE based NAS server.  To get it to some level of reliability, I stripped it down to the OS and Samba and am using it primarily for archiving.  I moved my SickBeard/SABnzbd/Twonky services to a much smaller Windows 2003 Server that was already in place as my Active Directory domain controller, and they have been running much more stably, much to the delight of both my wife and myself.

However, I recently returned from a photography trip to Utah and ran into some of my old demons, not daemons, when transferring the 40 GB of photos to the server.  The memory leak associated with zfs, Samba, and sendfile would render the server unconscious pretty quickly.  In the end I used rsync, bypassing Samba, to get the photos from the computer where I process them over to the server.  My original design called for working directly from the server copy and using zfs snapshots as "oops" insurance.

Last week, during my daily tech scanning (thank God for Google Reader :)), I began for the first time to get the feeling that native zfs on Linux might be stable enough for me to lean on.  Over the weekend, I backed up my NAS server, got a fresh 8 GB CF card, and loaded Ubuntu 10.04 LTS Server on it.  LTS in Ubuntu parlance means Long Term Support: this version of Ubuntu will be maintained for several years and has a reputation for being very stable.  My install was the base Ubuntu install, adding only OpenSSH and Samba as extra servers.

I then compiled and installed the zfs drivers from Darik Horn's PPA, imported my pools (zpool Ver 15, zfs Ver 3, no compression), and voila, I was in business (more details below on how to do this).  Without any tuning whatsoever, the performance test results are as follows:
Native write to zfs
dd if=/dev/zero of=/mytestfile.out bs=1k count=4194304
4194304+0 records in
4194304+0 records out
4294967296 bytes (4.3 GB) copied, 212.516 s, 20.2 MB/s

Native read from zfs
dd if=/mytestfile.out of=/dev/null bs=64k count=65536
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 69.888 s, 61.5 MB/s
In short - WOW!!  If you look at my earlier entry, you will see that the write performance almost matches the FreeBSD result of 22 MB/sec (and that was release code from FreeBSD, not development code), and while the read performance is only about two-thirds of FreeBSD's 92 MB/sec, it is still plenty fast.
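For anyone who wants to repeat these tests, here is a scaled-down sketch of the two dd runs above (1 MB instead of 4 GB so it finishes instantly; scale the count back up to 65536 for the real thing). The temp-file path here is my own placeholder; on the real box the file sat on the zfs mount. Note that a fair read test pulls from the file, not /dev/zero, and on Linux you would drop the page cache first so the read actually hits the disks:

```shell
# Scaled-down sketch of the dd benchmark; /tmp path is a placeholder,
# count=16 (1 MB) stands in for count=65536 (4 GB).
TESTFILE=/tmp/mytestfile.out

# Write test: stream zeros into the file.
dd if=/dev/zero of="$TESTFILE" bs=64k count=16

# For a fair read test on Linux, flush caches first (needs root):
#   sync; echo 3 > /proc/sys/vm/drop_caches

# Read test: stream the file back out to /dev/null.
dd if="$TESTFILE" of=/dev/null bs=64k

ls -l "$TESTFILE"
```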

But the real surprise was on Samba.  Using robocopy to copy the file created above down to my box and back up, I measured the following:
Read: 45 MB/sec
Write:  35 MB/sec
This exceeded the tuned read performance on FreeBSD (39 MB/sec) and matched the tuned write performance (35 MB/sec).  The kicker: my tests on FreeBSD used only 1 GB files on a box with 2 GB of RAM, so caching no doubt helped those numbers.  The test above used a 4 GB file with 2 GB of RAM, so the memory impact on the result is much smaller.

So the short of it is, I now have a box that in early testing performs as well as or slightly better than my previous efforts as a NAS server, and it doesn't have the instabilities inherent in a zfs-on-FreeBSD solution running on modest hardware (Atom 330 with 2 GB of RAM).  I think the days of FreeBSD's dominance in the open source world as the zfs server of choice are over - flame on, FreeBSD fanbois!

In reality, there is no doubt that FreeBSD is a good, if not great, OS, and that it has contributed greatly to the success of zfs with the masses.  But the further reality is that the OS and its zfs implementation don't work equally well across all use cases.  IMHO, they work great on small boxes with minimal loads, like many home NAS servers, and they work well on very large boxes with plenty of RAM for zfs, where routine transfers over Samba are not 20 times the size of the system's RAM.  But my use case, a home NAS user who needs to load large volumes of data on a fairly regular basis, is not well handled.

Stay tuned for further stories of what I hope are future successes!

PS:  Here's the quick how-to portion

1.  Install Ubuntu Server 10.04 LTS from the iso image.
2.  Take all defaults, except for additional software: select OpenSSH and Samba to be added
3.  After the install completes and the server reboots, apt-get update
4.  apt-get install python-software-properties
5.  add-apt-repository ppa:dajhorn/zfs
6.  apt-get update
7.  apt-get install dkms
8.  apt-get install ubuntu-zfs

In theory, step 7 should not be needed, as apt's dependency resolution should take care of it.  However, I ran into a bug, and this is the easy workaround.
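Put together, steps 3-8 look like this as a single session, followed by bringing the existing pools back in. The pool name `tank` below is a placeholder (substitute your own), and prefix each command with sudo if you are not running as root:

```shell
# Steps 3-8 above as one session (run as root, or prefix with sudo).
apt-get update
apt-get install -y python-software-properties   # provides add-apt-repository
add-apt-repository ppa:dajhorn/zfs
apt-get update
apt-get install -y dkms          # workaround for the dependency bug noted above
apt-get install -y ubuntu-zfs

# Then import the existing pools; "tank" is a placeholder pool name.
zpool import tank
zfs list
```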


  1. Your blog entry has been most helpful - having lost all my data in a full ZFS crash on a limited-spec FreeBSD server, I've since set up a higher-spec Ubuntu server, thanks to you. I still have a couple of problems (drives changing name on boot and the pool not being automatically mounted), but I'll see what's on the Internet.

    Great job!

  2. Your post is completely useless... you're dd'ing /dev/zero. ZFS, ext{3,4}, and most modern filesystems don't actually write zeros (they are technically sparse). Use /dev/urandom for testing write speeds.
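For what it's worth, if you want incompressible test data as the commenter suggests, one sketch looks like this (again scaled to 1 MB; the /tmp paths are placeholders). /dev/urandom is CPU-bound, so pre-generate the random file rather than timing urandom itself:

```shell
# Sketch of a write test with incompressible data, per the comment above.
# Generate the random data first; timing dd straight off /dev/urandom
# would benchmark the kernel's RNG, not the filesystem.
dd if=/dev/urandom of=/tmp/random.bin bs=64k count=16   # 1 MB; scale count up for real runs

# Timed step: copy the random file onto the filesystem under test.
dd if=/tmp/random.bin of=/tmp/random-copy.bin bs=64k
ls -l /tmp/random-copy.bin
```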


About Me

Houston, Texas, United States
Geek; sometimes it's biting the head off of a chicken, sometimes it's getting hit in the head while working on something :)