Benchmarking HOMER+FreeNAS: Introduction and methodology

There are many claims about the performance of ZFS. As I’m about to replace a nearly four-year-old file server, I decided to run thorough tests to see how the different pool configurations available to my 12-drive system actually perform. I’ve tested all the vdev types (single drive, mirror, raidz[1-3]) and stripes thereof. For more information about the hardware involved, please see the full specifications of the server used for these benchmarks.

I used FreeNAS 9.2-RC2 (x64) and IOzone v3.420 (compiled for 64-bit mode, build: freebsd) for these benchmarks. I disabled swap in the FreeNAS settings prior to creating any test pool, to prevent arbitrary swap I/O from skewing the results. I also disabled the ‘atime’ property on the test pool to reduce unnecessary I/O. The benchmarks were run inside a 64-bit portjail, with the test pool nullfs-mounted to /mnt inside the jail. SSHD was the only non-default service running. The jail itself lived on a ZFS pool consisting of an SSD mirror.
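For reference, here is a minimal sketch of the kind of setup commands involved, assuming a test pool named testpool and a jail root under /usr/jails/bench (both names hypothetical); the IOzone flags shown are its standard ones, and the 8 GiB file size is an assumption:

    # Disable atime on the test pool so reads don't generate metadata writes:
    zfs set atime=off testpool

    # Expose the test pool inside the jail via nullfs (FreeNAS mounts pools
    # under /mnt on the host):
    mount -t nullfs /mnt/testpool /usr/jails/bench/mnt

    # A typical IOzone run: automatic mode over a range of record sizes,
    # with an 8 GiB test file and spreadsheet-friendly output:
    iozone -a -s 8g -b results.xls -f /mnt/testfile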

Continue reading

ZFS: An explanation of different pool layouts

This post contains some examples and short descriptions of different ZFS pool layouts. I’m assuming you’re already familiar with the features of ZFS. If not, you may want to check out the Wikipedia article on ZFS.

General information

It’s recommended never to have more than nine drives in a single vdev, as going beyond that has a noticeable performance impact, especially when resilvering. Resilvering can become so slow that the odds of losing additional drives while the process runs grow uncomfortably high, potentially causing data loss. It’s therefore recommended to spread your drives across multiple vdevs in the same pool when you want to use more than nine of them.
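As a sketch, a twelve-drive pool built from two six-drive raidz2 vdevs (pool and device names hypothetical) could be created like this:

    # Two six-drive raidz2 vdevs in one pool; ZFS stripes data across them,
    # and a resilver only has to touch the six drives of the affected vdev:
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11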

Continue reading

FreeBSD 9 file server: Wiggum version 2.0

I’ve recently upgraded Wiggum (my file server) from FreeBSD 8.0 to FreeBSD 9.0. Since I had made some mistakes when originally setting up Wiggum two years ago, I went for a complete reinstall, recreating the zpools from scratch. This blog entry is a step-by-step guide to how I did the initial installation and setup.

Continue reading

FreeBSD: Filesystem Performance – The Setup

I’ve run a series of benchmarks on my prototyping server to determine performance differences between a variety of configurations:

  • Single drive UFS2
  • Single drive ZFS
  • ZFS 3-way mirror
  • ZFS stripe across 3 drives
  • ZFS RaidZ across 3 drives
  • ZFS RaidZ across 3 drives, plus an SSD as cache

All the details on configuration and benchmark methods are below!
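As an illustration, the last configuration in the list (RaidZ plus an SSD cache device) might be created along these lines, with hypothetical pool and device names:

    # Three-drive raidz vdev with an SSD (ada0) added as an L2ARC cache device:
    zpool create tank raidz da0 da1 da2 cache ada0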
Continue reading

FreeBSD ZFS: Advanced format (4k) drives and you

Historically, hard drives have had a sector size of 512 bytes. This changed when drives became large enough that the per-sector overhead of tracking so many small sectors (sync marks, inter-sector gaps, and error-correction data for each one) consumed too much storage space, making hard drives more expensive to produce than strictly necessary. Many modern drives are tagged as “advanced format” drives; right now, this means they have a sector size of 4096 bytes (4 KiB). This includes most, if not all, SSDs and most magnetic drives of 2 TB and larger.

If you create a partition on such a drive without ensuring the partition begins on a physical sector boundary, the device firmware has to do some “magic” (read-modify-write cycles to emulate the smaller logical sectors), which takes more time than not doing the magic in the first place, resulting in reduced performance. It is therefore important to make sure you align partitions correctly on these devices. I generally align partitions to the 1 MiB mark for the sake of being future-proof: even though my current drives have 512 B and 4 KiB sector sizes, I don’t want to encounter any problems when larger sector sizes are introduced.
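To check what a drive actually reports, FreeBSD’s diskinfo utility is handy (device name hypothetical):

    # Print detailed device information, including sector and stripe sizes:
    diskinfo -v /dev/ada0

Note that many advanced format drives report a 512-byte logical sector size for compatibility; on those, the 4 KiB physical size usually shows up as the stripesize.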

Although ZFS can use entire devices without partitioning, I use GPT to partition and label my drives. My labels generally reference the drive’s physical location in the server; for example, Bay1.2 means the drive is located in bay one, slot two. This makes it much easier to figure out which drive to replace when the need arises.
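A sketch of how such an aligned, labeled partition can be created with gpart, assuming the drive is da0 and the pool is named tank (both hypothetical):

    # Create a GPT partition table on the drive:
    gpart create -s gpt da0

    # Add a ZFS partition aligned to 1 MiB, labeled after its physical bay:
    gpart add -t freebsd-zfs -a 1m -l Bay1.2 da0

    # The label then appears under /dev/gpt/ and can be used to build the pool:
    zpool create tank /dev/gpt/Bay1.2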

Continue reading