I’m using FreeBSD 10.0-RELEASE on my file server, which will double as my package builder. I’d prefer to run Poudriere inside a jail so that all its binaries and configs are confined there, but this is not a supported configuration: Poudriere requires so many permissions that the security benefits would be minimal, and it still runs into trouble.
This shouldn’t be a problem, though, as Poudriere won’t expose any services, and the packages will be published by a jail utilizing read-only nullfs mounts.
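A read-only nullfs mount of that kind might look like the sketch below. The jail path and package directory are illustrative (the package directory shown is Poudriere’s usual default, but adjust both to your own layout):

```shell
# Expose Poudriere's package directory read-only inside a publishing jail.
# Paths are illustrative; adapt to your jail root and Poudriere config.
mount -t nullfs -o ro /usr/local/poudriere/data/packages \
    /usr/jails/pkgjail/usr/local/www/packages
```

The equivalent fstab entry (e.g. in the jail’s fstab referenced from jail.conf) would use the same source, target, and `ro` option.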
This benchmark has three pool configurations: a single drive, a 2-way mirror, and a 3-way mirror. Please see the previous posts on testing methodology and hardware specifications, if you haven’t already.
I expect the single drive to have the best write performance, followed by the 2-way and then the 3-way mirror. I expect the mirrors to have better read performance, as there are more drives to read from. I also expect a noticeable performance penalty for record sizes smaller than ZFS’s configured recordsize of 128k, and error margins of +/- 10%. The random reads/writes should improve almost linearly with larger record sizes.
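For reference, the recordsize in question is a per-dataset ZFS property, and 128k is the default. A minimal sketch, with a hypothetical pool/dataset name:

```shell
# recordsize caps the largest block ZFS will write for a dataset.
# "tank/bench" is a placeholder name.
zfs get recordsize tank/bench       # shows the current value (128K by default)
zfs set recordsize=128K tank/bench  # set it explicitly
```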
There are many claims about the performance of ZFS. As I’m about to replace a nearly four-year-old file server, I decided to run thorough tests to see how the different pool configurations available to my 12-drive system will actually perform. I’ve tested all the vdev types (single drive, mirror, raidz[1-3]), and stripes thereof. For more information about the involved hardware, please see the full specifications of the server used for these benchmarks.
I’ve used FreeNAS 9.2-RC2 (x64), and IOZone v3.420 (compiled for 64-bit mode, build: freebsd) for these benchmarks. I disabled SWAP in the FreeNAS settings prior to creating any test pool, in an effort to prevent arbitrary I/O from skewing results. I also disabled the ‘atime’ property on the test pool, to reduce unnecessary I/O. The benchmarks were run inside a 64-bit portjail, nullfs-mounting the test pool to /mnt inside the jail. SSHD was the only non-default service running. The jail itself ran on a ZFS pool consisting of an SSD mirror.
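The preparation steps described above roughly correspond to the commands below. Pool, jail, and file names are illustrative, and the IOZone invocation is only an example of the general shape of a run, not the exact flags used:

```shell
# Reduce unnecessary I/O on the pool under test ("testpool" is a placeholder).
zfs set atime=off testpool

# Make the test pool visible inside the benchmark jail via nullfs.
mount -t nullfs /testpool /usr/jails/benchjail/mnt

# Inside the jail, an IOZone run of this shape:
# -a: automatic mode, -s: file size, -r: record size, -f: test file path
iozone -a -s 8g -r 128k -f /mnt/iozone.tmp
```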
This post contains some examples and short descriptions of different ZFS pool layouts. I’m assuming you’re already familiar with the features of ZFS. If not, you may want to check out the Wikipedia article on ZFS.
It’s recommended to never have more than 9 drives in a single vdev, as this has a noticeable performance impact, especially when resilvering. Resilvering may become so slow that additional drives are likely to fail before the process completes, potentially causing data loss. It’s therefore recommended to use multiple vdevs in the same pool when you want to use more than 9 drives.
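As a sketch of that recommendation, a 12-drive system could be laid out as two 6-disk raidz2 vdevs in one pool rather than a single 12-disk vdev. Pool and device names below are illustrative:

```shell
# Two six-disk raidz2 vdevs striped together in one pool.
# ZFS stripes writes across the vdevs; each vdev resilvers independently.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
```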
I’ve recently upgraded Wiggum (my file server) from FreeBSD 8.0 to FreeBSD 9.0. Since I had made some mistakes when originally setting up Wiggum two years ago, I went for a complete reinstall – and recreation of the zpools. This blog entry is a step-by-step guide to how I did the initial installation and setup.
These are the results of this weekend’s benchmarking! I’ve tested a single-drive UFS2 file system, and compared it to several ZFS configurations (single and multi-drive). For all the juicy details on configuration and testing methods, please see FreeBSD: Filesystem Performance – The Setup.
I’ve run a series of benchmarks on my prototyping server to determine performance differences between a variety of configurations:
- Single drive UFS2
- Single drive ZFS
- ZFS 3-way mirror
- ZFS stripe across 3 drives
- ZFS RaidZ across 3 drives
- ZFS RaidZ across 3 drives, plus an SSD as cache.
All the details on configuration and benchmark methods are below!
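The ZFS layouts listed above could be created roughly as follows. Each pool is an alternative built on the same three drives, created and destroyed one at a time; pool and device names are illustrative:

```shell
zpool create tank da0                   # single drive ZFS
zpool create tank mirror da0 da1 da2    # 3-way mirror
zpool create tank da0 da1 da2           # stripe across 3 drives
zpool create tank raidz da0 da1 da2     # raidz across 3 drives

# raidz plus an SSD as cache: create the raidz, then add the SSD as L2ARC.
zpool create tank raidz da0 da1 da2
zpool add tank cache ada0
```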