[How-To] Previous Versions on Windows clients with Samba and ZFS Snapshots

This post describes my final configuration for making the “Previous Versions” fileshare feature work for Windows 10 clients connecting to a FreeBSD Samba server backed by ZFS, as well as how I got there. It involved reading through Samba documentation, code, and various posts on the internet (of which this mailing list entry was the most helpful) to figure out why things didn’t work as expected/documented, and how to work around that.

I’m using the Samba VFS module shadow_copy2 to achieve this, and have some observations about this module:

  • It has an unintuitive shadow:delimiter parameter – the contents of this parameter have to appear at the beginning of the shadow:format parameter. It cannot be unset or set to blank, and it defaults to “_GMT”.
  • The shadow:delimiter parameter cannot start with a literal “-”, and its value can’t be quoted or escaped. For example, neither “\-” nor “-” works, whether or not it is quoted in the config.
  • According to the documentation, shadow:snapprefix supports only “Basic Regular Expressions (BRE)”. Although the module accepts something that looks like a simplified regular expression syntax, it does not actually support BRE. It also requires special regular expression characters such as “(”, “)”, “{” and “}” (and possibly others) to be escaped with a backslash (\), even though those characters are part of the regex pattern and not meant as literals. This is not how regexes usually work.
  • I have not been successful in using the circumflex/“not” (^) operator, the character class (“[]”) operator, the “zero or more” (*) operator, or the “one or more” (+) operator in regexes here.

As such, I had to tweak my snapshot names to share a common ending sequence. The path of least resistance was to make them all end in “ly” – such as “frequently“, “daily“ etc. I also had to spell out every possible snapshot name in the regex.

Working smb.conf

I use “sysutils/zfstools” to create and manage snapshots on my FreeBSD file server, and I have configured it to store snapshots with date/time in UTC. As such, all snapshots are named in the pattern “zfs-auto-snap_(name_of_job)-%Y-%m-%d-%Hh%MU“. As an example, the “daily” snapshot created at 00:07 UTC on 28th December, 2020 is named “zfs-auto-snap_daily-2020-12-28-00h07U“.
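For illustration (a plain-shell sketch, not part of zfstools itself), the same UTC timestamp pattern can be reproduced with date(1); the “daily” job name below is just a placeholder:

```shell
# Compose a snapshot name the way zfstools does, using the same
# strftime pattern that later appears in shadow:format
job="daily"   # placeholder job name
name="zfs-auto-snap_${job}-$(date -u "+%Y-%m-%d-%Hh%MU")"
echo "$name"
```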

# Previous History stuff
vfs objects = shadow_copy2
shadow:snapdir = .zfs/snapshot
shadow:localtime = false
shadow:snapprefix = ^zfs-auto-snap_\(frequent\)\{0,1\}\(hour\)\{0,1\}\(dai\)\{0,1\}\(week\)\{0,1\}\(month\)\{0,1\}$
# shadow:format must begin with the value of shadow:delimiter, 
# and shadow:delimiter cannot start with -
shadow:delimiter = ly-
shadow:format = ly-%Y-%m-%d-%Hh%MU
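To see how this config takes a snapshot name apart, here is a plain-shell sketch (an illustration, not how Samba itself is implemented): everything before “ly-” is what shadow:snapprefix must match, and “ly-” plus the rest is parsed according to shadow:format:

```shell
# Split a sample snapshot name where the delimiter divides it:
# prefix (matched by shadow:snapprefix) + "ly-<timestamp>" (shadow:format)
snap="zfs-auto-snap_daily-2020-12-28-00h07U"
prefix="${snap%%ly-*}"       # zfs-auto-snap_dai
stamp="ly-${snap#*ly-}"      # ly-2020-12-28-00h07U
echo "prefix=${prefix} stamp=${stamp}"
```

Note that the prefix ends up as “zfs-auto-snap_dai”, which is why the snapprefix regex spells out “dai”, “week”, “month” etc. without the “ly”.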

Example Result

Example output, with a file server in UTC and client in UTC+0100:

Updates to this blog post

  • 2020-12-29
    • Removed incorrect “+” in vfs objects directive.
    • Moved a paragraph from the ingress to the smb.conf subheading

FreeBSD: Semi-manual ZFS+UEFI installation

This post will show how to install and update a FreeBSD ZFS+UEFI installation.

Start the installer normally, and go through the steps. When you get to the part where it asks whether you want to install to UFS, ZFS, etc., choose to open a shell instead.

Create the partition scheme on each drive that will be part of your root zpool, using unique labels. Replace ‘ada0’ with whatever is appropriate for your system.
gpart create -s gpt ada0
gpart add -t efi -s 800k ada0
gpart add -t freebsd-zfs -a 128m -l YourLabel ada0
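Repeated for every member drive, the steps above can be sketched as a dry run that only prints the commands (the drive names ada0/ada1 and the zdisk labels are placeholders; nothing here touches a disk):

```shell
# Dry run: print the gpart commands for each pool member instead of running them.
# Drive names (ada0, ada1) and labels (zdisk0, zdisk1) are placeholders.
i=0
for disk in ada0 ada1; do
    echo "gpart create -s gpt ${disk}"
    echo "gpart add -t efi -s 800k ${disk}"
    echo "gpart add -t freebsd-zfs -a 128m -l zdisk${i} ${disk}"
    i=$((i + 1))
done
```

Drop the echo once you have verified the commands match your hardware.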

I aligned the freebsd-zfs partition to 128MiB to ensure it’s 4k aligned, and to leave room for boot loader changes. 
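As a quick sanity check (my addition, not from the original steps), a 128 MiB offset divides evenly into 4096-byte sectors:

```shell
# 128 MiB in bytes; a zero remainder means the partition start is 4k aligned
offset=$((128 * 1024 * 1024))
echo "remainder: $((offset % 4096))"
```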

Create the zpool and add datasets, then exit the shell. The datasets for /usr and /var are not mounted, while their child datasets are mounted. This is because most of the data in /usr and /var belongs in the boot environment. Some of the subpaths have their own datasets because they should, in my opinion, be shared among boot environments.

zpool create -m none -o altroot=/mnt -O atime=off -O compress=lz4 sys gpt/YourLabel

zfs create -o canmount=off sys/ROOT
zfs create -o mountpoint=/ -o canmount=noauto sys/ROOT/default
zfs mount sys/ROOT/default
zfs create -o mountpoint=/var -o canmount=off -o compress=gzip-9 -o setuid=off -o exec=off sys/var
zfs create sys/var/audit
zfs create sys/var/log
zfs create -o atime=on sys/var/mail
zfs create -o atime=on sys/var/spool
zfs create -o exec=on sys/var/tmp

zfs create -o mountpoint=/usr -o canmount=off sys/usr
zfs create -o compress=gzip-9 sys/usr/src
zfs create sys/usr/obj

zfs create -o canmount=off sys/data
zfs create -o mountpoint=/usr/home -o setuid=off sys/data/homedirs
zfs create -o mountpoint=/root sys/data/root

zpool set bootfs=sys/ROOT/default sys

Now the installer should continue doing its thing. Do what you’d normally do, but when it asks if you want to open a shell into the new environment, say yes.

Execute these commands to ensure ZFS works as expected:
echo 'opensolaris_load="yes"' >> /boot/loader.conf
echo 'zfs_load="yes"' >> /boot/loader.conf
echo 'zfs_enable="YES"' >> /etc/rc.conf

Configure the UEFI partitions by doing the following for each drive that is a member of the ‘sys’ zpool: (remember to replace ‘ada0’ with whatever is appropriate for you)
dd if=/boot/boot1.efifat of=/dev/ada0p1

When upgrading FreeBSD, re-run the above command to apply new bootcode *after* having run installworld.
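With several drives in the pool, the dd step has to be repeated per drive. A dry-run sketch that just prints the command for each member (the drive list is a placeholder for your actual pool members):

```shell
# Dry run: print the dd command for each pool member's EFI partition.
# The drive list (ada0, ada1) is a placeholder; remove the echo to execute.
for disk in ada0 ada1; do
    echo "dd if=/boot/boot1.efifat of=/dev/${disk}p1"
done
```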

FreeBSD ZFS: Advanced format (4k) drives and you

Historically, hard drives have had a sector size of 512 bytes. This changed when drives became large enough that such a small sector size made the overhead of keeping track of all those sectors consume too much storage space, making hard drives more expensive to produce than strictly necessary. Many modern drives are tagged as “advanced format” drives; right now, this means they have a sector size of 4096 bytes (4 KiB). This includes most, if not all, SSDs and most 2 TB+ magnetic drives.

If you create a partition on such a drive without ensuring the partition begins on a physical sector boundary, the device firmware has to do some “magic” which takes more time than not doing the magic in the first place, resulting in reduced performance. It is therefore important to align partitions correctly on these devices. I generally align partitions to the 1 MiB mark for the sake of being future proof: even though my current drives have 512 B and 4 KiB sector sizes, I don’t want to run into problems when larger sector sizes are introduced.

Although ZFS can use entire devices without partitioning, I use GPT to partition and label my drives. My labels generally reference the drive’s physical location in the server; for example, Bay1.2 means the drive is located in bay one, slot two. This makes it much easier to figure out which drive to replace when the need arises.
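The bay/slot naming scheme is easy to generate; a sketch for a hypothetical 2-bay, 2-slot chassis (the counts are placeholders for your layout):

```shell
# Generate bay.slot labels for a hypothetical 2-bay, 2-slot chassis.
# A partition labeled this way appears as /dev/gpt/<label>, e.g. /dev/gpt/Bay1.2,
# which is the path you would hand to zpool create.
for bay in 1 2; do
    for slot in 1 2; do
        echo "Bay${bay}.${slot}"
    done
done
```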
