[How-To] Previous Versions on Windows clients with Samba and ZFS Snapshots

This post describes my final configuration for making the “Previous Versions” file share feature work for Windows 10 clients connecting to a FreeBSD Samba server backed by ZFS, as well as how I got there. It involved reading through Samba documentation, code, and various posts on the internet (of which this mailing list entry was the most helpful) to figure out why things didn’t work as expected or documented, and how to work around that.

I’m using the Samba VFS module shadow_copy2 to achieve this, and have some observations about this module:

  • It has an unintuitive shadow:delimiter parameter: its value must appear at the beginning of the shadow:format parameter. It cannot be unset or set to blank, and it defaults to “_GMT“.
  • The shadow:delimiter parameter cannot start with a literal “-“, and its value can’t be quoted or escaped. For example, neither “\-” nor “-” works, whether quoted in the config or not.
  • According to the documentation, shadow:snapprefix supports “Basic Regular Expressions (BRE)”. Although the module supports something that looks like a simplified regular expression syntax, it does not seem to support full BRE. It also requires special regular expression characters such as parentheses and braces (and possibly others) to be escaped with a backslash (\), even though those characters are part of the regex pattern and are not to be treated as literals. This is not how regexes usually work.
  • I have not been successful in using the negation operator (^), character classes ([]), the “zero or more” operator (*), or the “one or more” operator (+) in the regex here.

As such, I had to tweak my snapshot names to have a common ending sequence. The path of least resistance was to make them all end in “ly” – such as “frequently“, “daily“ etc. I also had to spell out every possible snapshot name in the regex.
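To see why the shared “ly” ending helps, here is a small sketch (plain sh parameter expansion, not anything shadow_copy2 actually runs) of how a snapshot name decomposes into the snapprefix part, the delimiter, and the shadow:format part:

```shell
# Conceptual split of a snapshot name: "ly-" is the shadow:delimiter;
# everything before it must match shadow:snapprefix, and everything
# from the delimiter onward must match shadow:format.
name="zfs-auto-snap_daily-2020-12-28-00h07U"
prefix="${name%%ly-*}"     # snapprefix part
stamp="ly-${name#*ly-}"    # delimiter + timestamp part
echo "$prefix"             # zfs-auto-snap_dai
echo "$stamp"              # ly-2020-12-28-00h07U
```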

Working smb.conf

I use “sysutils/zfstools” to create and manage snapshots on my FreeBSD file server, and I have configured it to store snapshots with date/time in UTC. As such, all snapshots are named in the pattern “zfs-auto-snap_(name_of_job)-%Y-%m-%d-%Hh%MU“. As an example, the “daily” snapshot created at 00:07 UTC on 28th December, 2020 is named “zfs-auto-snap_daily-2020-12-28-00h07U“.
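The naming pattern is a plain strftime format with a literal “U” suffix, so it can be reproduced with date(1) as a sanity check. Note this uses GNU date syntax for the fixed input date; FreeBSD’s date(1) would need -j -f instead of -d:

```shell
# Reproduce the zfstools snapshot name for the "daily" job run at
# 00:07 UTC on 2020-12-28 (GNU date syntax; illustrative only).
date -u -d '2020-12-28 00:07 UTC' '+zfs-auto-snap_daily-%Y-%m-%d-%Hh%MU'
# -> zfs-auto-snap_daily-2020-12-28-00h07U
```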

[global]
# Previous Versions stuff
vfs objects = shadow_copy2
shadow:snapdir = .zfs/snapshot
shadow:localtime = false
shadow:snapprefix = ^zfs-auto-snap_\(frequent\)\{0,1\}\(hour\)\{0,1\}\(dai\)\{0,1\}\(week\)\{0,1\}\(month\)\{0,1\}$
# shadow:format must begin with the value of shadow:delimiter, 
# and shadow:delimiter cannot start with -
shadow:delimiter = ly-
shadow:format = ly-%Y-%m-%d-%Hh%MU
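The escaping makes the pattern hard to eyeball. As a rough sanity check (a sketch only — grep’s POSIX BRE is used here as a stand-in for shadow_copy2’s matcher, which is not guaranteed to behave identically), each job-name prefix can be tested against the pattern:

```shell
# Verify each snapshot-name prefix (the part before the "ly-"
# delimiter) against the snapprefix pattern, using grep's BRE.
bre='^zfs-auto-snap_\(frequent\)\{0,1\}\(hour\)\{0,1\}\(dai\)\{0,1\}\(week\)\{0,1\}\(month\)\{0,1\}$'
for p in frequent hour dai week month; do
  echo "zfs-auto-snap_${p}" | grep -q "$bre" && echo "${p}: ok"
done
```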

Example Result

Example output, with a file server in UTC and client in UTC+0100:

Updates to this blog post

  • 2020-12-29
    • Removed incorrect “+” in vfs objects directive.
    • Moved a paragraph from the introduction to the smb.conf subheading

FreeBSD+ZFS Without Drives

This weekend I decided to set up a single-drive FreeBSD+ZFS system, and prove that you CAN remove (and replace) the only drive in a root ZFS pool without service interruption.

Recap of the video

  • Prerequisite: A standard FreeBSD 12.0 install with root on ZFS, where the root pool is smaller than the amount of system memory. (in my case: 8GB system memory, 4GB root pool)
  • Replace existing drive with a memory-backed block device
  • Physically remove the existing drive
  • Verify the system still works
  • Physically attach new drive to system
  • Replace memory-backed block device with new drive
  • Bonus: Reboot system, verify it boots from new drive.
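In outline, the drive-swap steps look something like the following. This is an illustrative, destructive FreeBSD sketch only — the pool name “sys”, the labels, and the 4GB size are assumptions matching the setup above, and this obviously has to run on a live system:

```shell
# Create a memory-backed md(4) device large enough to hold the pool.
mdconfig -a -t malloc -s 4g          # creates e.g. /dev/md0
# Move the pool onto the md device, then pull the old drive.
zpool replace sys gpt/OldLabel md0
# ...attach and partition the new drive as before, then move back...
zpool replace sys md0 gpt/NewLabel
```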

Scripts used in the demonstration are provided as a GitHub Gist.

FreeBSD: Semi-manual ZFS+UEFI installation

This post will show how to install and update a FreeBSD ZFS+UEFI installation.

Start the installer normally, and go through the steps. When you get to the part where it asks whether you want to install to UFS, ZFS, etc., choose to open a shell.

Create the partition scheme for each drive you will be using in your root zpool, and make sure to use unique labels. Make sure to replace ‘ada0’ with whatever is appropriate for you.
gpart create -s gpt ada0
gpart add -t efi -s 800k ada0
gpart add -t freebsd-zfs -a 128m -l YourLabel ada0

I aligned the freebsd-zfs partition to 128MiB to ensure it’s 4k aligned, and to leave room for boot loader changes. 

Create the zpool and add datasets, then exit the shell. The datasets for /usr and /var are not mounted, while their child datasets are mounted. This is because most of the data in /usr and /var belongs in the boot environment. Some of the subpaths have their own datasets because they should, in my opinion, be shared among boot environments.

zpool create -m none -o altroot=/mnt -O atime=off -O compress=lz4 sys gpt/YourLabel

zfs create -o canmount=off sys/ROOT
zfs create -o mountpoint=/ -o canmount=noauto sys/ROOT/default
zfs mount sys/ROOT/default
zfs create -o mountpoint=/var -o canmount=off -o compress=gzip-9 -o setuid=off -o exec=off sys/var
zfs create sys/var/audit
zfs create sys/var/log
zfs create -o atime=on sys/var/mail
zfs create -o atime=on sys/var/spool
zfs create -o exec=on sys/var/tmp

zfs create -o mountpoint=/usr -o canmount=off sys/usr
zfs create -o compress=gzip-9 sys/usr/src
zfs create sys/usr/obj

zfs create -o canmount=off sys/data
zfs create -o mountpoint=/usr/home -o setuid=off sys/data/homedirs
zfs create -o mountpoint=/root sys/data/root

zpool set bootfs=sys/ROOT/default sys
exit

Now the installer should continue doing its thing. Do what you’d normally do, but when it asks if you want to open a shell into the new environment, say yes.

Execute these commands to ensure ZFS works as expected:
echo 'opensolaris_load="yes"' >> /boot/loader.conf
echo 'zfs_load="yes"' >> /boot/loader.conf
echo 'zfs_enable="YES"' >> /etc/rc.conf

Configure the UEFI partitions by doing the following for each drive that is a member of the ‘sys’ zpool: (remember to replace ‘ada0’ with whatever is appropriate for you)
dd if=/boot/boot1.efifat of=/dev/ada0p1

When upgrading FreeBSD, re-run the above command to apply new bootcode *after* having run installworld.

Venus: Semi-Manual FreeBSD 11-CURRENT AMD64 ZFS+UEFI Installation

In this post I’ll be describing how to do a semi-manual installation of a FreeBSD 11 ZFS system with UEFI boot. Big thanks to Ganael Laplanche for this mailing list entry, as it was of great help. Some things have changed since then which make the process a little simpler, and that’s why I’m writing this. :) I’ll also include some steps I consider best practices.

The steps outlined below are generalized from how I installed FreeBSD on my dev box named Venus.

As I’m writing this, the latest FreeBSD 11 snapshot is r294912 (2016-01-27), and it does not yet support automatic installation to ZFS on UEFI systems. I’m using this snapshot to install the system.

Start the installer normally, and go through the steps. When you get to the part where it asks whether you want to install to UFS, ZFS, etc., choose to open a shell.

Create the partition scheme for each drive you will be using in your root zpool, and make sure to use unique labels. Make sure to replace ‘ada0’ with whatever is appropriate for you.
gpart create -s gpt ada0
gpart add -t efi -s 800k ada0
gpart add -t freebsd-zfs -a 1m -s 55g -l YourLabel ada0

I aligned the freebsd-zfs partition to 1M to ensure it’s 4k aligned, and to leave room for boot loader changes. I specified a 55GB partition because my SATADOM’s are 64GB, and I want to leave some free space in case I need to replace one of them with another which isn’t the exact same size, and because I want to leave some room for other things such as a future log, cache or swap partition.

Create the zpool and add datasets, then exit the shell. All datasets within sys/ROOT/default are optional.
zpool create -m none -o altroot=/mnt -O atime=off -O checksum=fletcher4 -O compress=lz4 sys gpt/YourLabel
zfs create -p -o mountpoint=/ sys/ROOT/default
zpool set bootfs=sys/ROOT/default sys
zfs create sys/ROOT/default/var
zfs create -o compress=gzip-9 -o setuid=off sys/ROOT/default/var/log
zfs create -o compress=gzip-9 -o setuid=off sys/ROOT/default/var/tmp
zfs create sys/ROOT/default/usr
zfs create -o compress=gzip-9 sys/ROOT/default/usr/src
zfs create sys/ROOT/default/usr/obj
zfs create sys/ROOT/default/usr/local
zfs create sys/data
zfs create -o mountpoint=/usr/home -o setuid=off sys/data/homedirs
zfs mount -a
exit

Now the installer should continue doing its thing. Do what you’d normally do, but when it asks if you want to open a shell into the new environment, say yes.

Execute this command to ensure ZFS mounts all datasets on boot:
echo 'zfs_enable="YES"' >> /etc/rc.conf

Configure the (U)EFI partitions by doing the following for each drive that is a member of the ‘sys’ zpool: (remember to replace ‘ada0’ with whatever is appropriate for you)
mkdir /mnt/ada0
newfs_msdos ada0p1
mount -t msdosfs /dev/ada0p1 /mnt/ada0
mkdir -p /mnt/ada0/efi/boot
cp /boot/boot1.efi /mnt/ada0/efi/boot/BOOTx64.efi
mkdir -p /mnt/ada0/boot
cat > /mnt/ada0/boot/loader.rc << EOF
unload
set currdev=zfs:sys/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF

At this time you can double check you have the expected file hierarchy in /mnt/ada0:

(cd /mnt/ada0 && find .)

Should output:
.
./efi
./efi/boot
./efi/boot/BOOTx64.efi
./boot
./boot/loader.rc
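Since this layout is easy to fat-finger, here is an optional dry run (safe, works in any POSIX shell) that builds the same hierarchy in a throwaway directory first. The empty BOOTx64.efi here is just a stand-in for the real /boot/boot1.efi:

```shell
# Build the expected EFI partition layout in a temp dir and list it.
tmp="$(mktemp -d)"
mkdir -p "$tmp/efi/boot" "$tmp/boot"
: > "$tmp/efi/boot/BOOTx64.efi"      # stand-in for /boot/boot1.efi
cat > "$tmp/boot/loader.rc" << EOF
unload
set currdev=zfs:sys/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF
(cd "$tmp" && find . | sort)
rm -rf "$tmp"
```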

Now, if you have more than one drive, you can just copy the contents of /mnt/ada0 to the appropriate mountpoints: cp -R /mnt/ada0/ /mnt/ada1/

Remember to unmount the EFI partitions, then exit the shell and reboot into the new system. :)

Once you’re in the new system, you should create a read-only ZFS dataset for /var/empty.

PS: Similar to how you need to re-apply bootcode when upgrading zpool version, you should probably re-copy /boot/loader.efi to the EFI partition as ./efi/boot/BOOTx64.efi. I am not sure if this is strictly necessary… But it shouldn’t hurt. :) I’ll update this paragraph when I get a confirmation one way or the other.

A farewell to the old, and a hello to the new

It’s late August, which means it’s currently late summer, and soon fall, where I live. It’s time to decide on a winter project. Previously, I’ve had coding sprees on various things related to the MMORPG Anarchy Online. Some examples, in no particular order: AOItems (an items database), Towerwar Tracker (mobile) (shows Player versus Player land control fights), PlanetMap Viewer, and more. Although I have to admit some of those stretched into year-round and even multi-year projects. :) In idle moments, I’ve taken stabs at improving things in the FreeBSD land as best I can, by writing some guides and giving a helping hand on various social media, forums and IRC.

Anarchy Online has been the main focus point of many of my IT-related hobby projects. And much of my home infrastructure, powered by FreeBSD, is the way it is now because it had to be that way to support and run those projects. I started playing the game in December 2004, and stopped playing it in early 2009. I have played it a week here or there when large patches have hit, but for the most part, just logged in to check if my AO-related tools were still working, or in some other way related to improving those tools. It’s now August 2015, and it has been six years since I stopped playing that game. I’ve been making tools for the game for more than ten years, and it has been fun. It has been an incredible learning experience. I’ve done many crazy things, such as:

  • Making a multi-process map compiler using PHP and syncing state on disk (four processes were about 50% faster than one)
  • Porting said compiler to C#.NET, heavily optimized with threads. About 40x faster than the single-process PHP version :)
  • Creating a LARGE toolchain (20 libraries/applications) for extracting item info from the game client and making sense of it, storing it in a DB and displaying it on a website using PHP. The toolchain parses through 14 years of patch history in about 15 minutes (similar tools were said to take weeks to do the same). It also verifies integrity, and automatically creates reports on human-induced mistakes in the data, for easy and detailed report submission to the game’s developers.
  • For the planetmap viewer, using a hooking library to run my own code in the context of the game client, effectively using the game client as an API.
  • Much more :)

If it wasn’t for this game, its awesome community, and all the related projects I’ve worked on, I wouldn’t be where I am today. I wouldn’t have known C#.NET as well as I do, if at all. There are many friends I would never have made otherwise, all over the world. If you ever read this, you should know who you are. :)

All that being said, 10 years is a long time. It’s a third of my life on this planet. It was fun, it was quite the experience. It really was. But now it’s time to change focus. I will maintain the existing projects for the foreseeable future, until a suitable successor can be found. No new features are likely to arrive. All the tools except those related to AOItems.com are open source, so anyone who’s up to the task can fork and improve them. I’ll keep AOItems updated for the foreseeable future. If I ever stop maintaining it, I will make the tools available to the community, so that others can pick up the torch where I left it. People can do amazing things if you let them. :)

To the whole Anarchy Online community, who know me as “Demoder”: Thank you for being truly awesome, and inspiring me to do ‘crazy’ things. I’ll try not to be a stranger!

The New

I plan on learning to play the (musical) keyboard. Exercise more. Be more outgoing. Maybe quit a bad habit or two. All of those things that people write on their blogs, and sometimes follow up on. And often don’t. But there’s more.

I was introduced to the world of FreeBSD and Linux back in 2001 or so. The story is long and for another blog post, but the point is this: FreeBSD has been with me in varying degrees for nearly 15 years, or half my life, and I’m now at a point in life where I feel like contributing more than I have in the past. I started earlier this year, submitting a PR and a patch for adding libxo support to iscsictl(8), and then proceeding with a thorough technical review of the book “FreeBSD Mastery: ZFS” by Michael W. Lucas and Allan Jude. And I feel these are the kinds of things I want to do with the large chunk of my spare time which is labeled “geeky things”. Make contributions to something, hopefully making a positive change for other people in the process.

I’ve been using jails for a long time. For the Linux folks out there, think containers. For the Solaris people out there, think zones. I love how easy it is to manage large amounts of data with ZFS, and how trouble-free it is to share this data with the jail environments, with (nearly) no overhead.

The past couple of years, some awesome people have been implementing a native hypervisor in FreeBSD called “bhyve”. I love it. I really do. It lets me do things that jails wouldn’t. It will eventually let me retire my VMware ESXi server. There are a few things about it which are annoying, but most of those are being worked on by very skilled people. The one feature I miss the most is an easy and cheap way to leverage ZFS with no performance penalty. Think of jails with nullfs, or having a ZFS dataset delegated to a jail. That’s super awesome.

My primary use case would be to set up a virtualized file server, leveraging ZFS on the host without ZFS on ZFS or similar overhead, and avoiding the growing complexity of things like NFS configurations. As such, my new winter project is FreeBSD-related. It involves Bhyve (hypervisor), file systems, host/vm communication, and simplifying administration. It involves learning C properly, virtio, FUSE, and FreeBSD kernel internals. Its name is Tunnel File System (https://tunnelfs.io/). The goal is to share files between host and guest like you do between host and jails using nullfs. It aims to be simple to configure/use by the system administrator. It aims to be predictable. It aims to make your life simpler. You can read about it in more detail on the project’s site.

I have a pretty good idea (or so I think right now!) of how to implement it. I know what I need to learn. I know I have a LOT to learn. But that’s okay. I LOVE learning. Almost anything fun in life involves learning something new. So I’ll get started on that. And I’ll probably find out that I can’t do these things the way I wanted, but that’s okay too. It’s a learning experience, and I will get to the finish line eventually. :)

PS: If you haven’t read FreeBSD Mastery: ZFS, and use or plan to use ZFS, you should go read it. Even if you’re not using FreeBSD. If you don’t own it, you should buy it. It’s awesome, and it will look good on your book shelf.

Converting a FreeBSD MySQL server to jail host with MySQL in jail

I have a FreeBSD 10.0 server which currently only runs Percona MySQL server 5.6 backed by ZFS. The SQL server doesn’t have a high enough load to justify dedicated hardware, but I also don’t want to run it as a virtual machine as I want to use local ZFS storage, and because of virtualization overhead. The server is dual-homed (DMZ and LAN).

The solution is to convert the server into a jail host, and run MySQL inside a jail. The overhead should be minimal to nonexistent, as I won’t be using VNET.

FreeBSD jail server with ZFS clone and jail.conf

I’ll be using FreeBSD 10.0 AMD64 with root on ZFS, but you can follow these instructions as long as you have a ZFS pool on the system. It is assumed that the system is already installed and basic configuration is complete.

It should be noted that the benefit from using ZFS clones will more or less vanish if you do a major ‘world’ upgrade on the jail, for example upgrading from FreeBSD 9.2 to FreeBSD 10.0. This won’t be a problem for my setup as I’ll eventually get around to configuring sysutils/py-salt to automatically deploy my jails, and I’ll post about it when I do.