Recommendations on ZFS on FreeBSD as a NAS box?

I was thinking of building a home backup system using FreeBSD 7.2 and the ZFS file system. Has anyone had any experience with that file system?

Specifically:

  • Is it possible to boot from ZFS? (Would I want to?)
  • How easy is it to add a drive?
  • How well does it handle drives of different sizes?
  • Can you add new drives on the fly (or at least with just a reboot)?
  • Would I be better served by something off the shelf?

Any other thoughts and suggestions would be welcome.

Edit:

Just to be clear I have read the FreeBSD page on ZFS. I am looking for suggestions from people with practical experience with a similar setup to what I want.


Solution 1:

I built a home FreeBSD file server using ZFS.

It is an AMD X2 3200+ with 3GB of RAM and a PCI Express Gig-E network card. The boot drive is an old 400GB drive, and I have 4 750GB Seagate drives (one with a different firmware version, just in case).

Booting from ZFS would have been nice (it would make the install simpler), but I used the ZFSOnRoot instructions to set up the Root/OS drive with ZFS (if all the partitions are ZFS, then it doesn't need to run fsck at boot to check the UFS filesystems). The reason you would want this is that you can then set up all of your partitions (/var, /usr, /tmp, etc.) with different options as required (such as noatime and async for /usr/obj, which will speed up kernel compiles), while they all share space from a common pool. You can then set up a data drive and give each user a partition of their own (with different quotas and settings), and take snapshots (which are low cost on ZFS).
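
A minimal sketch of that per-dataset setup (the dataset names and values here are illustrative, not the exact ones from my system):

    zfs set atime=off tank/usr/obj          # per-dataset option: skip access-time updates
    zfs create dozer/home/walterp           # one dataset per user, sharing the pool's space
    zfs set quota=250G dozer/home/walterp   # per-user quota
    zfs snapshot dozer/home/walterp@today   # snapshots are cheap copy-on-write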

My home server has a df that looks like:
/dev/ad0s1a           1.9G    744M    1.1G    41%    /
devfs                 1.0K    1.0K      0B   100%    /dev
dozer/data            1.8T     62G    1.7T     3%    /data
dozer/home            1.7T    9.6G    1.7T     1%    /home
dozer/home/walterp    1.9T    220G    1.7T    11%    /home/walterp
tank/tmp              352G    128K    352G     0%    /tmp
tank/usr              356G    4.4G    352G     1%    /usr
tank/var              354G    2.2G    352G     1%    /var

Performance-wise, copying files is really fast. The one thing I would note is that I have been using ZFS on FreeBSD AMD64 systems with 3-4GB of RAM and it has worked well, but from my reading, I'd be worried about running it on an i386 system with 2GB or less of memory.

I ran out of SATA ports on the motherboard, so I have not tried to add any new drives. The initial setup was simple: a command to create the RAIDZ and then the command to create /home, which was formatted in seconds (IIRC). I'm still using the older version of ZFS (v6), so it has some limitations (it doesn't require drives of equal size, but unlike a Drobo, if you had 3 750GB drives and a 1TB drive, the end result would be as if you had 4 750GB drives).

One of the big reasons I used ZFS with RAIDZ was the end-to-end checksums. CERN published a paper documenting a test in which they found 200+ uncorrected read errors while running a R/W test over a period of a few weeks (the ECC in retail drives is expected to let an error through about once every 12TB read). I'd like the data on my server to be correct. I had a hard crash caused by a power outage (someone overloaded the UPS by plugging a space heater into it), but when the system came back, ZFS came back quickly, without the standard fsck issues.
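
If you want to look for such errors yourself, ZFS can verify every checksum on demand (standard commands, using my pool name from above):

    zpool scrub dozer       # read and re-verify every block in the pool, in the background
    zpool status -v dozer   # the CKSUM column reports any checksum errors found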

I like it because I could then add CUPS to Samba to get a print server. I added a DNS cache and can add other software as I like (I'm thinking about adding SNMP monitoring to the desktops at my house to measure bandwidth usage). For what I spent on the system, I'm sure I could have bought a cheap NAS box, but then I wouldn't have a 64-bit local Unix box to play with. If you like FreeBSD, I'd say go with it. If you prefer Linux, then I'd recommend a Linux solution. If you don't want to do any administration, that is when I would go for the standalone NAS box.

On my next round of hardware upgrades, I'm planning to upgrade the hardware and then install the current version of FreeBSD, which has ZFS v13. v13 is cool because I have a battery-backed RAM disk that I can use for the ZIL log (this makes writes scream). It also has support for using SSDs to speed up the file server (the specs on the new Sun file servers are sweet, and they get that speed from a ZFS system that uses SSDs to make the system very quick).
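
Attaching those devices to an existing pool is a one-liner each (a hedged sketch; the device names are illustrative):

    zpool add tank log /dev/da0     # dedicated ZIL (slog) on the battery-backed RAM disk
    zpool add tank cache /dev/da1   # L2ARC read cache on an SSD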

EDIT: (Can't leave comments yet.) I pretty much followed the instructions at http://www.ish.com.au/solutions/articles/freebsdzfs. The one major change since those instructions were written is that 7.2 came out; if you have 2+ GB of RAM, you should not have to add the following three lines to /boot/loader.conf:

# Cap kernel memory and the ZFS ARC (only needed on low-memory systems)
vm.kmem_size_max="1024M"
vm.kmem_size="1024M"
vfs.zfs.arc_max="100M"

The instructions also explain how to create a mirror and how to put the system back into recovery mode (mount with ZFS). After playing with those instructions once or twice, I used the ZFS Administration Guide from Sun ( http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf ) to better understand what ZFS was. To create my data store, I used a modified version of the command on page 91 of that guide. This being FreeBSD, I had to make a small change:

zpool create dozer raidz /dev/ad4 /dev/ad6 /dev/ad8 /dev/ad10

Here, ad4-ad10 were found by running dmesg | grep 'ata.*master'; these are the names of the SATA hard drives on the system that will be used for the big data partition. On my motherboard, the first ata device numbers (ad0-ad3) belong to the PATA ports, and because each SATA port is a master, there are no odd numbers.

To create the file system, I just did:

zfs create dozer/data
zfs set mountpoint=/data dozer/data

The second command is required because I turned off default mountpoints for shares.
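
A quick sanity check at this point (standard commands) confirms the layout:

    zpool status dozer   # shows the raidz vdev and its member disks
    zfs list             # confirms dozer/data is mounted at /data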

Solution 2:

Introduction: I finally built my system, and here are my notes, in case it helps anyone else.

Goals:

  • Build a home NAS box which can also double as my source control and internal web server.
  • Keep the cost under $1000

Specifications:

  • Must have at least one terabyte of storage
  • Must have data redundancy (RAID or something similar)
  • Must be able to replace my current aging source code control server

Design:

  • FreeBSD 7.2 (eventually to be upgraded to 8.0).
  • OS is on its own boot drive, in this case one IDE drive
  • Data is stored on six SATA drives.

We use ZFS as the file system, since it has gotten such favorable reviews. ZFS pretty much requires a 64-bit OS and likes a lot of memory, so I should get a minimum of 4GB.

Hardware:

  • ABS Aplus ABS-CS-Monolith Black SECC Steel ATX Full Tower Computer Case - 1 @ $69.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16811215009

  • Western Digital Caviar Green WD5000AADS 500GB SATA 3.0Gb/s 3.5" Hard Drive 6 @ $347.94 ($57.99 ea) http://www.newegg.com/Product/Product.aspx?Item=N82E16822136358

  • XIGMATEK XLF-F1253 120mm 4 white LED LED Case Fan - 1 @ $8.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16835233015

  • LITE-ON Black IDE DVD-ROM Drive Model iHDP118-08 - 1 @ $19.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16827106275

  • Crucial 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) CT2KIT25664AA800 - 1 @ $45.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16820148160

  • GIGABYTE GA-MA74GM-S2 AM2+/AM2 AMD 740G Micro ATX AMD Motherboard - 1 @ $54.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16813128342

  • OKGEAR 18" SATA II Cable Model GC18ATASM12 - 6 @ $11.94 ($1.99 ea) http://www.newegg.com/Product/Product.aspx?Item=N82E16812123132

  • AMD Athlon 64 X2 5050e Brisbane 2.6GHz Socket AM2 45W Dual-Core Processor Model ADH5050DOBOX - 1 @ $62.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103298

  • KINGWIN Mach 1 ABT-1000MA1S 1000W ATX / BTX Power Supply - 1 @ $199.99 http://www.newegg.com/Product/Product.aspx?Item=N82E16817121037

  • Seagate 400GB IDE Drive - 1 @ (had in closet); an 80GB IDE drive costs about $37.00, and 80GB is more than enough.

  • Adaptor Bracket for IDE hard drive to fit in a five inch bay - 1 @ ~ $17.00

  • Shipping - 1 @ ~ $35.00

Software: FreeBSD 7.2 - 1 @ $0.00 http://www.freebsd.org/

Total Cost: $874.81

Hardware Setup: A basic computer build, with three minor issues.

  1. The case I purchased had slots for 6 hard drives and two 3.5" bays. I assumed the IDE drive could fit in one of the 3.5" bays. This was a bad assumption, and there was no reasonable way to make it work. I went and bought an adapter at Fry's for ~$17.00, and it worked fine.

  2. The SATA cables I purchased had 90-degree connectors, which was nice, except that with six drives there was no way to make them work. Plugging in one cable caused the inflexible part of the connector to hang over the next hard drive. I had to go to Fry's and buy 5 regular SATA cables. Sadly, the ones I bought at Newegg were so cheap that it is not worth sending them back.

  3. The case points the back of the hard drives toward the side of the case, and the power cables from the power supply have a stiff connector which stuck out over the edge of the case. This didn't let me slide the side cover back into place. I had to play around with it a bit, and eventually ended up with two of the modular power cables (each has four SATA plugs) interleaved between the drives, so that the first cable powered drives 0, 2, and 4, and the second powered 1, 3, and 5. This allowed enough flex that I could zip-tie them out of the way.

OS Setup:

  1. Burned the FreeBSD 7.2 ISOs to CD. I could have used the single DVD, but I didn't have any lying around.

  2. Burned memtest86+ ( http://www.memtest.org/ ) onto a CD.

  3. Powered up the freshly built computer and went into the BIOS to make sure it saw all 7 drives and the DVD-ROM. It did. Changed the boot order to make the CDROM first.

  4. Inserted the memtest86+ CD into the freshly built computer, rebooted, and let it run overnight. Passed with no errors.

  5. Installed FreeBSD 7.2. If you are unfamiliar with this, I recommend reading the following: http://www.freebsd.org/doc/en/books/handbook/install.html It does a much better job of explaining what to do than I can. Here are my specific settings:

    • Did a Standard install
    • Used the entire IDE drive for the OS
      • used the default file system layout
      • left the 6 SATA drives untouched
    • Developer install without X-Windows, since the box will be headless
    • The system is not an NFS Client or Server
    • FTP and inetd disabled
    • SSH allowed
    • Added no packages (those would be added later).
    • Added one user
  6. After install and reboot, I noticed that only 4 of the 6 SATA drives were detected. I went into the BIOS and, under Integrated Peripherals, changed OnChip SATA Type to AHCI and OnChip SATA Port 4/5 Type to "SATA", saved the settings, and rebooted.

  7. At this point FreeBSD detected all six drives as: ad4 ad6 ad8 ad10 ad12 ad14

  8. Got the latest source from CVS using csup: csup -g -L 2 stable-supfile. I had already edited the file to use the host cvsup11.us.FreeBSD.org, leaving all other information as is (the exact edit is shown below).
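
    The edit itself is a single line in the supfile (the path is the stock FreeBSD example location; I'm assuming the file was copied from there):

    # in a copy of /usr/share/examples/cvsup/stable-supfile
    *default host=cvsup11.us.FreeBSD.org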

  9. Rebuilt and installed the latest kernel and world as described here: http://www.freebsd.org/doc/en/books/handbook/makeworld.html Customized my kernel (see ZFSNAS). I disabled a large set of devices, since I never plan on using SCSI, USB, PCMCIA, Serial, Parallel, etc. Added the following to /etc/make.conf:

    CPUTYPE=athlon64
    CFLAGS= -O2 -fno-strict-aliasing -pipe

    Then built everything with:

    make -j8 buildworld

NAS Setup:

  1. Create the ZFS pool for our storage:

    zpool create storage raidz2 ad4 ad6 ad8 ad10 ad12 ad14

  2. Create the home filesystem on the newly created storage:

    zfs create storage/home
    cp -rp /home/* /storage/home
    rm -rf /home /usr/home
    zfs set mountpoint=/home storage/home
    
  3. Edit /etc/rc.conf and add the following:

    zfs_enable="YES"
    

    This mounts the ZFS file systems on bootup.

  4. Created the root, samba, and perforce file systems:

    zfs create storage/root
    cp -rp /root/* /storage/root
    rm -rf /root 
    zfs set mountpoint=/root storage/root
    zfs create storage/fileshare
    zfs create storage/perforce
    

    Unless you need more file systems on your pool, you are pretty much done with the ZFS part. See the following for more details: http://www.freebsd.org/doc/en/books/handbook/filesystems-zfs.html http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
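
    To double-check that everything landed where intended (standard commands):

    zfs list -o name,used,mountpoint   # one line per dataset, with its mountpoint
    zpool status storage               # should show one raidz2 vdev with all six disks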

Ports installed:

/usr/ports/shells/bash
    make install
/usr/ports/editors/vim
    make install
/usr/ports/net/samba33
    make
    make install
    # Use all defaults; un-check CUPS.
/usr/ports/devel/perforce
    make
    make install PERFORCE_PORT=XXXX PERFORCE_USER=p4user PERFORCE_GROUP=p4
    rm -rf /usr/local/perforce
    cd /storage/perforce/
    mkdir root
    mkdir logs    # matches the log path in perforce.conf below
    chown p4user:p4user *
    cd /storage
    chown p4user:p4user perforce 

Edited /usr/local/etc/perforce.conf as follows:
    #
    # Perforce FreeBSD configuration file
    #
    #
    # $FreeBSD: ports/devel/perforce/files/perforce.conf.in,v 1.3 2005/01/18 15:43:36 lth Exp $

    #
    # Perforce ROOT
    #
    PERFORCE_ROOT="/storage/perforce/root"

    #
    # Perforce user (it is recommended to run p4d as a non-root user)
    #
    PERFORCE_USER="p4user"

    #
    # p4d/p4p port (default: 1666)
    #
    PERFORCE_PORT="XXXX"

    #
    # p4p cache directory
    #
    PERFORCE_PROXY_CACHE="/usr/local/perforce/cache"

    #
    # p4p target server (default: perforce:1666)
    #
    PERFORCE_PROXY_TARGET="perforce:1666"

    #
    # p4d options (see man p4d)
    #
    PERFORCE_OPTIONS="-d -p $PERFORCE_PORT -v server=1 -L /storage/perforce/logs/p4d.log"

    #
    # Uncomment this line to have the server started automatically
    #
    PERFORCE_START=yes

Users Added:

user1
user2

Groups created:

sambashare
    Added user1 and user2 as members

chgrp sambashare /storage/fileshare
chmod 775 /storage/fileshare
chmod g+s /storage/fileshare

Samba Configuration:

Samba configuration file:
#################
    [global]
       workgroup = USERLAN
       server string = ZFS NAS
       security = user
       hosts allow = 192.168.1. 127.
       log file = /usr/local/samba/var/log.%m
       max log size = 50
       passdb backend = tdbsam
       dns proxy = no

    [user1share]
       comment = user1 share
       path = /storage/fileshare
       valid users = user1 user2
       public = no
       writable = yes
       printable = no
       create mask = 0765
#################

pdbedit -a -u user1 
    # followed prompts
pdbedit -a -u user2 
    # followed prompts
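
Before starting the daemons, it is worth validating the configuration and enabling Samba at boot (testparm ships with Samba; the rc.conf line is what I recall the samba33 port's rc script using, so treat it as an assumption):

testparm
    # parses smb.conf and reports any syntax errors
echo 'samba_enable="YES"' >> /etc/rc.conf
    # start Samba at boot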

Solution 3:

  • Is it possible to boot from ZFS? (Would I want to?)

I see no reason why you'd want to. I'd think the snapshot support is only mature enough in OpenSolaris that you could switch back to an older version and boot from it (but that's actually just wild guessing).

  • How easy is it to add a drive?

Add as in expand a striped pool? Just add a drive to the pool, and that's about it. Consider the implications of your next question, though.

  • How well does it handles drives of different sizes?

You could use it as a stripe and tell ZFS to keep n copies of a file, so you could use the full storage capacity you have and still get decent redundancy.
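
For example, with the copies property (a minimal sketch; the pool/dataset name is illustrative):

    zfs set copies=2 mypool/data   # store every block twice, spread across disks where possible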

  • Can you add new drives on the fly (or at least with just a reboot)?

"Replacing Devices in a Storage Pool" (in the ZFS Administration Guide) is, I guess, the recommended solution; the easiest way to find out how well this works on FreeBSD is probably to give it a try.
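
The command behind that is zpool replace (a hedged sketch; the pool and device names are illustrative):

    zpool replace mypool ad4 ad12   # resilver the data from ad4 onto the new ad12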

  • Would I be better served by something off the shelf?

Have you considered FreeNAS? According to its roadmap, 0.70 seems about to be released and will support ZFS.

You will save yourself the hassle of all the framework and get a relatively nice-to-use GUI for free with it.

Solution 4:

I have servers with FreeBSD+ZFS (on 7.2-STABLE and 8.0-CURRENT), though not in production.

Booting from ZFS is described here: http://lulf.geeknest.org/blog/freebsd/Setting_up_a_zfs-only_system/

Adding drives on the fly is as easy as typing "zpool add mypool da7"; the new drive is usable right after this. You can also add a whole bunch of drives at once, as a stripe, mirror, raidz (improved RAID-5), or raidz2 (improved RAID-6).
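
For example, adding a whole vdev at once (a hedged sketch; the pool and device names are illustrative):

    zpool add mypool mirror da8 da9         # add a two-disk mirror vdev
    zpool add mypool raidz da10 da11 da12   # or a three-disk raidz vdev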

Drives of different sizes can be placed in a pool, but they can't be fully used in a mirror/stripe/raidz (if I recall correctly, only the smallest drive's capacity will be used on each disk in that vdev).

(Open)Solaris has support for ZFS right out of the box.

Solution 5:

There's a nice thread on building a home ZFS NAS over at ArsTechnica.