FreeBSD jail server with ZFS clone and jail.conf

I’ll be using FreeBSD 10.0 AMD64 with root on ZFS, but you can follow these instructions as long as you have a ZFS pool on the system. It is assumed that the system is already installed and basic configuration is complete.

It should be noted that the benefit of using ZFS clones will more or less vanish if you do a major ‘world’ upgrade on the jail, for example upgrading from FreeBSD 9.2 to FreeBSD 10.0. This won’t be a problem for my setup, as I’ll eventually get around to configuring sysutils/py-salt to deploy my jails automatically, and I’ll post about it when I do.
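As a rough sketch of the mechanics (dataset names, the jail name, addresses and paths below are illustrative assumptions, not taken from the post), a prepared template jail is snapshotted, cloned, and referenced from a jail.conf entry:

    # Snapshot the template jail and clone it into a new dataset
    # (make sure the clone's mountpoint matches the jail's path).
    zfs snapshot zroot/jails/template@pristine
    zfs clone zroot/jails/template@pristine zroot/jails/web1

    # /etc/jail.conf -- minimal entry for the cloned jail
    web1 {
        path = "/usr/local/jails/web1";
        host.hostname = "web1.example.com";
        ip4.addr = "192.0.2.10";
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }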

FreeBSD package builder with Poudriere

I’m using FreeBSD 10.0-RELEASE on my file server, which will double as my package builder. I’d prefer to run Poudriere inside a jail so that all its binaries and configs are confined there, but that isn’t a supported configuration: Poudriere requires so many permissions that the security benefit would be minimal, and it still runs into trouble.

This shouldn’t be a problem, though, as Poudriere won’t expose any services, and the packages will be published by a jail using read-only nullfs mounts.
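To illustrate that layout (the build-jail name, ports tree name, and paths are assumptions for the example, not quoted from the post), Poudriere builds on the host and the resulting repository is handed to a serving jail via a read-only nullfs mount:

    # On the host: create a build jail and a ports tree, then build from a list
    poudriere jail -c -j 10amd64 -v 10.0-RELEASE -a amd64
    poudriere ports -c -p default
    poudriere bulk -j 10amd64 -p default -f /usr/local/etc/poudriere.d/pkglist

    # fstab entry for the web-serving jail: expose the package directory read-only
    # (the actual package path depends on the poudriere.conf defaults in use)
    /usr/local/poudriere/data/packages  /usr/local/jails/www/usr/local/www/packages  nullfs  ro  0  0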


FreeBSD jail host with multiple local networks

My jail host is running FreeBSD 10.0-RELEASE and is directly connected to two local networks. One is my LAN, and the other is a DMZ for various internet-facing services. I don’t want my DMZ jails to be able to send network traffic directly to my LAN, and each jail needs a default route matching the network its IP address resides on in order to communicate outside its local subnet.

To solve this, I’m going to use multiple routing tables, also known as FIBs, which are manipulated with the setfib utility. I know I could have used the experimental virtual network stack (VNET), which is awesome, but I opted not to, as it still has some problems with stability and memory leaks. EDIT: It turns out that jails are able to use the ‘setfib’ command themselves, so a firewall might be necessary to disallow communication between certain jails and destinations.
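A minimal sketch of the FIB setup (the addresses, jail name, and FIB number are placeholders): the number of routing tables is set at boot, the second table gets its own default route, and the jail is bound to it with exec.fib:

    # /boot/loader.conf -- create a second routing table (takes effect at boot)
    net.fibs=2

    # Add a default route to FIB 1, pointing at the DMZ gateway
    setfib 1 route add default 192.0.2.1

    # /etc/jail.conf fragment -- run the DMZ jail's processes in FIB 1
    dmz_www {
        ip4.addr = "192.0.2.10";
        exec.fib = 1;
    }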


FreeNAS ZFS Benchmarks: 1 Data Drive

This benchmark has three pool configurations: a single drive, a 2-way mirror, and a 3-way mirror. Please see the previous posts on testing methodology and hardware specifications if you haven’t already.

I expect the single drive to have the best write performance, followed by the 2-way and then the 3-way mirror. I also expect the mirrors to have better read performance, as there are more drives to read from, and I expect a noticeable performance penalty for record sizes smaller than ZFS’s configured recordsize of 128k. I expect error margins of +/- 10%. The random reads/writes should see almost linearly better performance with larger record sizes.
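For reference, the three layouts under test map to zpool commands along these lines (pool and device names are placeholders):

    # Single drive
    zpool create testpool da1

    # 2-way mirror
    zpool create testpool mirror da1 da2

    # 3-way mirror
    zpool create testpool mirror da1 da2 da3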


Benchmarking HOMER+FreeNAS: Introduction and methodology

There are many claims about the performance of ZFS. As I’m about to replace a nearly four-year-old file server, I decided to run thorough tests to see how the different pool configurations available to my 12-drive system will actually perform. I’ve tested all the vdev types (single drive, mirror, raidz[1-3]) and stripes thereof. For more information about the hardware involved, please see the full specifications of the server used for these benchmarks.

I’ve used FreeNAS 9.2-RC2 (x64) and IOzone v3.420 (compiled for 64-bit mode, build: freebsd) for these benchmarks. I disabled swap in the FreeNAS settings prior to creating any test pool, in an effort to prevent arbitrary I/O from skewing the results. I also disabled the ‘atime’ property on the test pool to reduce unnecessary I/O. The benchmarks were run inside a 64-bit portjail, with the test pool nullfs-mounted to /mnt inside the jail. sshd is the only non-default service started. The jail was running on a ZFS pool consisting of an SSD mirror.
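A sketch of the per-run preparation under those settings (the pool name, jail path, and IOzone parameters are illustrative, not the exact values used):

    # Disable atime on the test pool to avoid extra metadata writes
    zfs set atime=off testpool

    # Make the test pool available at /mnt inside the benchmark jail
    mount -t nullfs /testpool /usr/jails/benchjail/mnt

    # Example IOzone run: sequential and random read/write with an 8GiB file
    iozone -i 0 -i 1 -i 2 -r 128k -s 8g -f /mnt/iozone.tmp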


Meet HOMER the file server

HOMER

This is HOMER. His full name is Heavy Overkill of Mandatory Expectations and Requirements. His task is to store all the household files, including those of Sideshow Bob (my ESXi server), using a combination of iSCSI, NFS and CIFS. Once in production, he’ll be running FreeNAS.

With the SuperChassis 826BE16-R920LPB 2U storage chassis, he’s smaller than expected on the outside. But it’s the inside that counts:

Homer Dissection

These are Homer’s internal components. If you want a really good look, you should click the image to see it in full resolution.
Mainboard: Supermicro X9SRL-F (Single Xeon E5, 6x SATA, 8x DIMM, LGA2011)
CPU: Intel Xeon E5-1620 v2 (4-core, 3.7GHz)
RAM: SM Hynix 16GB DDR3-1866 2Rx4 ECC REG DIMM x4 (64 GB total)
HBA: Supermicro PCIe SAS 6Gbps controller (8 internal ports, IR mode, RAID 0, 1 & 1E, LSI 2308)
Extra NIC: Supermicro PCIe 2-port Intel i350 Gigabit Ethernet card, low-profile
SSD: 2x Samsung SM843 Series 2.5″ 120GB SATA 6Gbps
HDD: 12x Western Digital RE Enterprise 2TB, SATA3

Homer I/O ports

This is the back side of Homer’s chassis. There’s a redundant PSU, PS/2 ports for mouse and keyboard, a 100Mbit/s Ethernet port dedicated to the IPMI KVM-over-LAN feature, a serial port, a VGA port, and two 1Gbps onboard Ethernet ports.

This picture shows a temporary hook-up of the server in a new rack. It’ll be much prettier once I get a hold of properly sized cables and do some cable management! But that’s for another post.

I’m currently in the process of doing many performance tests on him, which will be posted separately as they complete. Stay tuned!

ZFS: An explanation of different pool layouts

This post contains some examples and short descriptions of different ZFS pool layouts. I’m assuming you’re already familiar with the features of ZFS. If not, you may want to check out the Wikipedia article on ZFS.

General information

It’s recommended never to have more than 9 drives in a single vdev, as going wider has a noticeable performance impact, especially when resilvering. Resilvering may become so slow that it’s likely you’ll lose additional drives while the process is running, potentially causing data loss. It’s therefore recommended to use multiple vdevs in the same pool when you want to use more than 9 drives.
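For example, twelve drives could be arranged as two 6-drive raidz2 vdevs instead of one wide vdev; ZFS then stripes across the vdevs (pool and device names are placeholders):

    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11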
