Benchmarking HOMER+FreeNAS: Introduction and methodology

There are many claims about the performance of ZFS. As I’m about to replace a nearly four-year-old file server, I decided to run thorough tests to see how the different pool configurations available to my 12-drive system actually perform. I’ve tested all the vdev types (single drive, mirror, raidz[1-3]) and stripes thereof. For details on the hardware involved, please see the full specifications of the server used for these benchmarks.

I used FreeNAS 9.2-RC2 (x64) and IOZone v3.420 (compiled for 64-bit mode, build: freebsd) for these benchmarks. I disabled swap in the FreeNAS settings prior to creating any test pool, in an effort to prevent arbitrary I/O from skewing the results. I also disabled the ‘atime’ property on the test pool to reduce unnecessary I/O. The benchmarks were run inside a 64-bit portjail, with the test pool nullfs-mounted to /mnt inside the jail. SSHD was the only non-default service running. The jail itself lived on a ZFS pool consisting of an SSD mirror.
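For anyone wanting to reproduce the setup, preparing each test pool boiled down to something like the following (the pool name and jail path are illustrative placeholders, not my actual ones):

    # Disable atime on the test pool to avoid a metadata write on every read
    zfs set atime=off test
    # Expose the pool at /mnt inside the jail via a nullfs mount
    mount -t nullfs /mnt/test /mnt/jails/bench/mnt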

Since it is generally recommended to benchmark with a dataset at least twice the size of available RAM (assuming the filesystem cache can use all of the memory), I did an initial run with a dataset of 2x memory size. However, this produced too many ARC hits, skewing the results, and the benchmarks took too long to complete with such a large dataset. I therefore restricted the ARC size to 2GB and benchmarked with a 6GB dataset (3x the ARC size, or ‘file system cache’ in more general terms). The system still had access to all of its memory, 64GB; as far as I know, this additional unused memory should not affect the benchmarks.
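Capping the ARC comes down to a single loader tunable; in FreeNAS 9.x it can be added under System → Tunables, which ends up in /boot/loader.conf (the value below is 2GB expressed in bytes):

    # /boot/loader.conf: limit the ARC to 2GB
    vfs.zfs.arc_max="2147483648"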

I have also not used log or cache devices, which would improve sync writes and all reads respectively, because I was interested in the performance of the various pool configurations rather than that of the speedy SSDs. My rationale is that log and cache devices would impact all configurations equally, so it is better to remove them from the equation. I may follow up with tests that include such devices in the future.
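For the curious, attaching such devices to an existing pool would look something like this (the pool and device names are placeholders):

    # Mirrored SLOG to accelerate sync writes
    zpool add test log mirror da12 da13
    # L2ARC device to accelerate reads
    zpool add test cache da14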

The results may be skewed by nullfs performance, by whatever overhead running inside a jail imposes, and potentially by factors unknown to me related to the large amount of RAM available to the system as a whole. There may also be some impact from the SSD mirror pool, as ARC was not disabled on that pool.

Command line: iozone -a -s 6g -y 64
IOZone report of configuration parameters:

    Auto Mode
    File size set to 6291456 KB
    Using Minimum Record Size 64 KB
    Command line used: iozone -a -s 6g -y 64
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.

Benchmarks

  • 1 data drive
    • Single drive
    • Mirror, 2 drives
    • Mirror, 3 drives
  • 2 data drives
    • 2x Mirror, 2 drives each
    • 2x Mirror, 3 drives each
    • RaidZ, 3 drives
    • RaidZ2, 4 drives
    • RaidZ3, 5 drives
  • 4 data drives
    • RaidZ, 5 drives
    • RaidZ2, 6 drives
    • RaidZ3, 7 drives
    • 2x RaidZ, 3 drives each
    • 2x RaidZ2, 4 drives each
    • 2x RaidZ3, 5 drives each
  • 8 data drives
    • RaidZ2, 10 drives
    • RaidZ3, 11 drives
    • 2x RaidZ, 5 drives each
    • 2x RaidZ2, 6 drives each
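For reference, the FreeNAS GUI builds these pools for you, but created by hand with zpool(8) a few of the configurations above would look roughly like this (device names are placeholders):

    # 2x mirror, 2 drives each
    zpool create test mirror da0 da1 mirror da2 da3
    # RaidZ2, 6 drives
    zpool create test raidz2 da0 da1 da2 da3 da4 da5
    # 2x RaidZ, 5 drives each
    zpool create test raidz da0 da1 da2 da3 da4 raidz da5 da6 da7 da8 da9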
