FreeBSD jail host with multiple local networks

My jail host is running FreeBSD 10.0-RELEASE and is directly connected to two local networks: my LAN, and a DMZ for various internet-facing services. I don’t want my DMZ jails to be able to send network traffic directly to my LAN, and each jail needs a default route matching the network its IP address resides in, so it can communicate outside its local subnet.

To solve this, I’m going to use multiple routing tables, also known as FIBs, which are manipulated with the setfib utility. I know I could have used the experimental virtual network stack (VNET), which is awesome, but I opted not to, as it still has some problems with stability and memory leaks. EDIT: It seems that jails are able to use the setfib command as well, so a firewall might be necessary to disallow communication between certain jails and destinations.
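In rough strokes, the approach looks something like the sketch below. Treat it as an outline only; the FIB number, gateway address, and jail name are made-up examples, and I’m assuming the exec.fib parameter from jail.conf for pinning a jail’s processes to a routing table:

    # /boot/loader.conf: create a second routing table at boot
    net.fibs="2"

    # give FIB 1 a default route via the DMZ gateway (example address)
    setfib 1 route add default 192.0.2.1

    # /etc/jail.conf: run a DMZ jail's processes in FIB 1 (example jail name)
    dmzjail {
        exec.fib = 1;
    }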

Continue reading

FreeNAS ZFS Benchmarks: 1 Data Drive

This benchmark has three pool configurations: a single drive, a 2-way mirror, and a 3-way mirror. Please see the previous posts on testing methodology and hardware specifications, if you haven’t already.

I expect the single drive to have the best write performance, followed by the 2-way and then the 3-way mirror. I expect the mirrors to have better read performance, as there are more drives to read from, and I expect a noticeable performance penalty for record sizes smaller than ZFS’s configured recordsize of 128k. I assume error margins of +/- 10%. Random read/write performance should improve almost linearly with larger record sizes.
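For reference, the recordsize property mentioned above can be inspected and tuned like this; the pool name is an example, not the one used in these benchmarks:

    # show the current recordsize (128K by default)
    zfs get recordsize tank
    # lower it for small-record workloads; only affects newly written files
    zfs set recordsize=16K tank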

Continue reading

Benchmarking HOMER+FreeNAS: Introduction and methodology

There are many claims about the performance of ZFS. As I’m about to replace a nearly four-year-old file server, I decided to run thorough tests to see how the different pool configurations available to my 12-drive system actually perform. I’ve tested all the vdev types (single drive, mirror, raidz[1-3]), and stripes thereof. For more information about the involved hardware, please see the full specifications of the server used for these benchmarks.

I’ve used FreeNAS 9.2-RC2 (x64) and IOZone v3.420 (compiled for 64-bit mode, build: freebsd) for these benchmarks. I disabled swap in the FreeNAS settings prior to creating any test pool, in an effort to prevent arbitrary I/O from skewing results. I also disabled the atime property on the test pool, to reduce unnecessary I/O. The benchmarks were run inside a 64-bit portjail, with the test pool nullfs-mounted to /mnt inside the jail. sshd was the only non-default service running. The jail itself lived on a ZFS pool consisting of an SSD mirror.
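To make that setup concrete, here’s a rough sketch of the steps described above; the pool name, jail path, and IOZone flags are illustrative, not necessarily the exact ones used:

    # disable atime on the test pool to avoid extra metadata writes
    zfs set atime=off testpool
    # expose the test pool inside the jail via nullfs
    mount -t nullfs /mnt/testpool /jails/benchjail/mnt
    # inside the jail: IOZone in automatic mode, files up to 4G (example flags)
    iozone -a -g 4G -f /mnt/iozone.tmp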

Continue reading

ZFS: An explanation of different pool layouts

This post contains some examples and short descriptions of different ZFS pool layouts. I’m assuming you’re already familiar with the features of ZFS. If not, you may want to check out the Wikipedia article on ZFS.

General information

It’s recommended never to have more than 9 drives in a single vdev, as this has a noticeable performance impact, especially when resilvering. Resilvering can become so slow that you risk losing additional drives while the process is running, potentially causing data loss. It’s therefore recommended to split the drives across multiple vdevs in the same pool when you want to use more than 9.
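As an illustration of that recommendation, a 12-drive system could be split into two 6-drive raidz2 vdevs instead of one wide vdev; the device names here are examples:

    # two 6-drive raidz2 vdevs striped in one pool
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11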

Continue reading

FreeBSD 9 file server: Wiggum version 2.0

I’ve recently upgraded Wiggum (my file server) from FreeBSD 8.0 to FreeBSD 9.0. Since I had made some mistakes when originally setting up Wiggum two years ago, I went for a complete reinstall, recreating the zpools as well. This blog entry is a step-by-step guide to how I did the initial installation and setup.

Continue reading

FreeBSD: Filesystem Performance – The Setup

I’ve run a series of benchmarks on my prototyping server to determine performance differences between a variety of configurations:

  • Single drive UFS2
  • Single drive ZFS
  • ZFS 3-way mirror
  • ZFS stripe across 3 drives
  • ZFS RaidZ across 3 drives
  • ZFS RaidZ across 3 drives, plus an SSD as cache

All the details on configuration and benchmark methods are below!
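As a rough preview of what those layouts look like on the command line, here’s a hedged sketch; device names are examples, and the exact commands used in the post may differ:

    newfs -U /dev/da0p1                            # single drive UFS2, soft updates
    zpool create tank da0                          # single drive ZFS
    zpool create tank mirror da0 da1 da2           # ZFS 3-way mirror
    zpool create tank da0 da1 da2                  # ZFS stripe across 3 drives
    zpool create tank raidz da0 da1 da2            # ZFS RaidZ across 3 drives
    zpool create tank raidz da0 da1 da2 cache ada0 # RaidZ plus an SSD as cache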
Continue reading