I have spread the story of this journey across a series of posts:
- Part 0: A Questionable Idea
- Part 1: Switching Personalities
- Part 2: The Export/Import Business ⇐ you are here
- Part 3: No Special Snowflakes
Now, back to the important stuff.
So I need to export my chosen ZFS storage pool every time Ubuntu shuts down. As much as I prefer the FreeBSD system of initialization scripts, and regard systemd with a degree of suspicion, it is generally a good idea to work within the framework that the operating system provides until it proves inadequate. And for this purpose, it was indeed adequate. A few more web searches yielded these useful links:
Which I boiled down to this systemd service, stored in /etc/systemd/system/zpool-export.service:
[Unit]
Description=ZFS Pool Export
Before=zfs.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/sbin/zpool export -a -f
[Install]
WantedBy=zfs.target
The ExecStart=/bin/true with RemainAfterExit=yes makes the service a no-op at startup; the real work happens in ExecStop at shutdown. It’s a blunt instrument, thanks to the -a and -f flags. I’ll probably have to refine it later to be more precise. And that’s assuming that it’s what I want. (I have a hunch I’m missing a detail or two.) I won’t know until I try. Time to install it and get it working.
systemctl daemon-reload
systemctl enable zpool-export.service
systemctl start zpool-export.service
Now I can reboot back into Ubuntu as many times as I want in a row, and the datasets in the zdata storage pool mount automatically. But that’s not really an accomplishment, is it? That’s what the operating system would do for me anyway. I’m not handling anything differently yet.
I have to address FreeBSD’s needs. I want to be able to boot back and forth between the two freely, and see the same data on the shared pool.
Examining the various systemd units that came with the zfsutils-linux package,1 I saw that they were taking a two-step approach:
- import the storage pools without mounting the datasets as file systems (zpool import -N)
- mount all the ZFS datasets as file systems
I adopted the same strategy, but shoehorned it into scripts that would work well with FreeBSD’s initialization system – specifically with the library /etc/rc.subr that can make writing these scripts easier.
First, I wrote a script to import the storage pools from certain devices but not mount them when its service “starts,” and export those same storage pools when its service “stops.” I installed it as /usr/local/etc/rc.d/zpool-shared.
Then, I wrote a script that “starts” its service by mounting the ZFS datasets from those storage pools as file systems, and does the opposite when the service “stops.” I installed it as /usr/local/etc/rc.d/zfs-shared.
Add in a few key comments such as PROVIDE: and REQUIRE: so that FreeBSD can order the scripts properly, and that should be it! Let’s set the key variables that trigger the desired behaviors from FreeBSD’s initialization system.
sysrc zpool_shared_enable=YES zpool_shared_devices=/dev/da1p1 zpool_shared_pools=zdata
sysrc zfs_shared_enable=YES zfs_shared_datasets=zdata
sysrc sets values in /etc/rc.conf that are useful for configuring the system and its services.
zpool_shared_enable and zfs_shared_enable should be self-explanatory by their names.
zpool_shared_devices specifies which devices to search for storage pools. zpool_shared_pools gives the names of the pools I expect to find. zfs_shared_datasets lists the common prefixes of dataset names (usually the names of the storage pools that contain them) that are considered interesting for this purpose. Note this does not include the main FreeBSD storage pool, which the installer traditionally names zroot.
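For flavor, here is a minimal sketch of the shape this takes with /etc/rc.subr, using the variable names from the sysrc lines above. It is a sketch, not the installed script:

```sh
#!/bin/sh

# PROVIDE: zpool_shared
# REQUIRE: zfs
# KEYWORD: shutdown

# Sketch of /usr/local/etc/rc.d/zpool-shared. zfs-shared is analogous,
# with REQUIRE: zpool_shared and zfs mount/unmount in place of
# zpool import/export.

. /etc/rc.subr

name="zpool_shared"
rcvar="zpool_shared_enable"

start_cmd="${name}_start"
stop_cmd="${name}_stop"

zpool_shared_start()
{
	local _dev _devargs _pool

	_devargs=""
	for _dev in ${zpool_shared_devices}; do
		_devargs="${_devargs} -d ${_dev}"
	done
	for _pool in ${zpool_shared_pools}; do
		# -N imports the pool without mounting its datasets
		zpool import -N ${_devargs} "${_pool}"
	done
}

zpool_shared_stop()
{
	local _pool

	for _pool in ${zpool_shared_pools}; do
		zpool export "${_pool}"
	done
}

load_rc_config $name
: ${zpool_shared_enable:="NO"}

run_rc_command "$1"
```

The KEYWORD: shutdown comment matters here: it is what tells FreeBSD to run the stop method during an orderly shutdown, which is exactly when the pools need exporting.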
I booted back and forth between Ubuntu and FreeBSD, using the appropriate GRUB menu entries, and saw that the zdata pool and its datasets were not always mounted. This would take some debugging, mostly on the Ubuntu side. It looks like my attempt at a zpool-export.service didn’t work out so well. Time to remove it.
systemctl disable zpool-export.service
rm /etc/systemd/system/zpool-export.service
To imitate the approach that was working on the FreeBSD side, I created two systemd services, one for the storage pools and the other for the datasets. I offloaded all the logic into scripts stored in /usr/local/sbin/zpool-shared and /usr/local/sbin/zfs-shared respectively. Instead of reading values (indirectly) from /etc/rc.conf, they would look in /etc/default/zpool-shared and /etc/default/zfs-shared respectively for key variables. Aside from the specific variable names and the details of dealing with each operating system’s initialization paradigms, the main logic of the scripts for both operating systems was identical.
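The units themselves are small. A sketch of the dataset half, reusing the oneshot pattern from earlier (not the file verbatim):

```ini
# /etc/systemd/system/zfs-shared.service (sketch; zpool-shared.service
# is analogous, calling /usr/local/sbin/zpool-shared instead)
[Unit]
Description=Mount shared ZFS datasets

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/zfs-shared start
ExecStop=/usr/local/sbin/zfs-shared stop

[Install]
WantedBy=multi-user.target
```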
There were two main sources of trouble:
- systemd was trying to mount the ZFS datasets before the storage pool completed its import. Hooray for race conditions!
- The scripts were not gracefully handling the cases where the storage pools were already imported or the datasets were already mounted.
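The second problem boils down to checking before acting. A sketch of that logic in portable shell, with hypothetical function names rather than the real scripts’ contents:

```shell
# Sketch of graceful handling; function names are hypothetical.

import_pool_idempotent() {
	pool=$1 device=$2
	# `zpool list <pool>` exits nonzero when the pool is not imported
	if zpool list "$pool" >/dev/null 2>&1; then
		echo "$pool: already imported, nothing to do"
	else
		zpool import -N -d "$device" "$pool"
	fi
}

mount_dataset_idempotent() {
	dataset=$1
	# the `mounted` property is "yes" for a dataset that is mounted
	if [ "$(zfs get -H -o value mounted "$dataset" 2>/dev/null)" = "yes" ]; then
		echo "$dataset: already mounted, nothing to do"
	else
		zfs mount "$dataset"
	fi
}
```

Exporting and unmounting get the mirror-image checks on the “stop” side.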
I addressed the timing problem by reading the following systemd manual pages:
In particular, proper use of Requires, After, and WantedBy got me the ordering I was looking for, which is summarized here:
| Unit file | Section | Ordering constraint |
|---|---|---|
| zpool-shared.service | Unit | Requires=zfs.target |
| zpool-shared.service | Unit | After=zfs.target |
| zpool-shared.service | Install | RequiredBy=zpool-shared.target |
| zfs-shared.service | Unit | Requires=zpool-shared.target |
| zfs-shared.service | Unit | After=zpool-shared.target |
| zfs-shared.service | Install | WantedBy=multi-user.target |
But does it reproduce? All this work is worth approximately bupkis if nobody can reproduce it.2 I’ll try to answer that in the conclusion of the series.
You can discover the particular set of files via either of the following:
find /lib/systemd/system -type f -name 'zfs*'
dpkg-query -L zfsutils-linux | grep ^/lib/systemd/system/ ↩︎
This colorful Yiddish word may have originally meant beans but evolved to describe the excrement of certain ungulates. In modern usage, one of its synonyms is “the square root of bugger all.” Ungulate excrement is generally regarded as not immediately and directly useful for computing, though there may be extremely indirect applications that remain to be researched. ↩︎