I have spread the story of this journey across a series of posts:
- Part 0: A Questionable Idea
- Part 1: Switching Personalities
- Part 2: The Export/Import Business
- Part 3: No Special Snowflakes ⇐ you are here
But does it reproduce?
My co-workers know me as a person who likes command lines, and whose definition of a “one-liner” may be a bit…expansive at times. The challenge for me, therefore, is to replicate the results in a slightly different environment: fewer frills, fewer graphical installs, and more typing. I chose to replace Ubuntu 22.04 LTS “Jammy Jellyfish” with Debian 11 “Bullseye,” selecting only the most basic options, to see if it would work as easily. (I’m keeping FreeBSD in every iteration of this experiment, thank you very much!)
In particular, the Debian install media offers no distinction between a server and a desktop. You get the features you ask for and you don’t get the features you don’t.
A new machine part 1: Debian
I created a new virtual machine that had the same shape and size, but with fresh disks of its own:
- 8 GB of memory
- 2 virtual CPUs
- an ethernet network interface
- a 40 GB SCSI hard drive for operating systems
- a 20 GB SCSI hard drive for the shared data
- no sound card
- no camera
- UEFI firmware
I ran through the Debian installer in a fairly straightforward form, and manually chose a set of disk partitions that consumed approximately half the disk. I planned them out to look like this:
| Index | Size | Filesystem | Mount point | Name | Purpose |
|---|---|---|---|---|---|
| 1 | 1 GB | EFI | (automatic) | efi | EFI system partition |
| 2 | 2 GB | ext4 | /boot | linux-boot | Linux boot |
| 3 | 18 GB | linux-lvm | see below | linux-lvm | Linux LVM |
| 4 | 2 GB | swap | (none) | swap | Swap |
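I did this partitioning by hand in the installer, but for reference, roughly the same layout could be produced non-interactively with `sgdisk` (a sketch only, not the steps I actually took; exact start/end sectors will differ):

```shell
# Hypothetical equivalent of the manual partitioning, leaving the rest of
# the disk free for FreeBSD. GPT type codes: ef00 = EFI system partition,
# 8300 = Linux filesystem, 8e00 = Linux LVM, 8200 = Linux swap.
sgdisk -n 1:0:+1G  -t 1:ef00 -c 1:efi        /dev/sda
sgdisk -n 2:0:+2G  -t 2:8300 -c 2:linux-boot /dev/sda
sgdisk -n 3:0:+18G -t 3:8e00 -c 3:linux-lvm  /dev/sda
sgdisk -n 4:0:+2G  -t 4:8200 -c 4:swap       /dev/sda
```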
Within the LVM partition `/dev/sda3` I created:
- one single volume group `vg0`, consuming as much space as possible;
- one single logical volume `lv0`, consuming as much space as possible, mounted at `/`.
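The installer drove all of this, but the equivalent hand-typed commands would look something like the following (a sketch, assuming `/dev/sda3` as laid out above):

```shell
pvcreate /dev/sda3                 # mark the partition as an LVM physical volume
vgcreate vg0 /dev/sda3             # one volume group spanning it
lvcreate -l 100%FREE -n lv0 vg0    # one logical volume taking all the space
mkfs.ext4 /dev/vg0/lv0             # format it for use as the root filesystem
```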
The rest of the disk would be consumed by FreeBSD.
I had brilliantly1 declined to install the common system utilities. When I finally rebooted into this fresh system, I had to use the su utility and a root password – much like UNIX system administrators of yore – to reach a tolerable setup where I could use sudo and a screen-oriented text editor.2 But after that brief ordeal, it was time to install the ZFS packages via the OpenZFS project’s getting started guide for Debian. Examining the system with available text-oriented tools, I saw the following:
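That brief ordeal amounted to something like this (a sketch; the package names are the obvious Debian ones, and `myuser` stands in for my actual account):

```shell
su -                                    # become root with the root password
apt update && apt install -y sudo vim   # the missing creature comforts
usermod -aG sudo myuser                 # let my ordinary account use sudo
exit                                    # and never touch su again
```

A fresh login is needed after `usermod` for the new group membership to take effect.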
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
|-sda1 8:1 0 953M 0 part /boot/efi
|-sda2 8:2 0 1.9G 0 part /boot
|-sda3 8:3 0 16.8G 0 part
| `-vg0-lv0 254:0 0 16.8G 0 lvm /
`-sda4 8:4 0 1.9G 0 part [SWAP]
sdb 8:16 0 20G 0 disk
|-sdb1 8:17 0 20G 0 part
`-sdb9 8:25 0 8M 0 part
sr0 11:0 1 1024M 0 rom
This looks like a reasonable arrangement of block storage devices. What can it tell us about the partition table?
$ sudo partx -s /dev/sda
NR START END SECTORS SIZE NAME UUID
1 2048 1953791 1951744 953M efi ...
2 1953792 5859327 3905536 1.9G linux-boot ...
3 5859328 41015295 35155968 16.8G linux-lvm ...
4 41015296 44920831 3905536 1.9G swap ...
That also looks good.
I created the `zdata` storage pool on `sdb` and the `zdata/home` dataset within it:
zpool create zdata /dev/sdb
Examining the partition table on `sdb`:
$ sudo partx -s /dev/sdb
NR START END SECTORS SIZE NAME UUID
1 2048 41924607 41922560 20G zfs-be42e62def1bd6ad ...
9 41924608 41940991 16384 8M ...
It was consistent with what we saw before on the Ubuntu machine.
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zdata 19.5G 184K 19.5G - - 0% 0% 1.00x ONLINE -
So I created a dataset and proved it was what I wanted:
zfs create -o mountpoint=/zhome zdata/home
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zdata 184K 18.9G 24K /zdata
zdata/home 24.5K 18.9G 24.5K /zhome
The data sets were mounted as well.
$ df
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 796M 660K 796M 1% /run
/dev/mapper/vg0-lv0 17G 1.6G 16G 10% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1.8G 50M 1.7G 3% /boot
/dev/sda1 952M 5.8M 946M 1% /boot/efi
zdata 19G 128K 19G 1% /zdata
zdata/home 19G 128K 19G 1% /zhome
I got lucky with one of the choices that Debian made:
$ mount | grep /boot/efi
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
What Debian calls `vfat`, FreeBSD calls `msdosfs`, and FreeBSD can mount and write it natively, without adding any external packages. So hopefully we won’t have to engage in a manual step to get the FreeBSD boot loader executable in place.
I proceeded to copy files from the Ubuntu+FreeBSD machine over the network so I could install the scripts and systemd units without typing them all over again.
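I won’t reproduce the exact transfer, but it was in this spirit (a sketch; the host name and file patterns are placeholders, not my real paths):

```shell
# "ubuntu-old" is the previous machine; the glob patterns stand in for
# wherever the scripts and systemd units actually live.
scp 'ubuntu-old:/usr/local/sbin/zfs-*.sh' /tmp/
scp 'ubuntu-old:/etc/systemd/system/zfs-*.service' /tmp/
sudo install -m 755 /tmp/zfs-*.sh /usr/local/sbin/
sudo install -m 644 /tmp/zfs-*.service /etc/systemd/system/
```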
The GRUB setup was much friendlier on Debian, allowing a five-second view of the menu before proceeding. I changed it to 30 seconds to match what I had previously.
sed -i -e '/GRUB_TIMEOUT=/s/=.*/=30/' /etc/default/grub
update-grub
Time to attend to the other half of the machine.
A new machine part 2: FreeBSD
I only needed to add one partition to the GPT via the FreeBSD installer:
gpart add -t freebsd-zfs -l freebsd-zfs da0
And that would be dedicated to the operating system. I created a storage pool within this partition and the approximately-standard group of datasets. (It’s a long script, not presented here.) This time we could add `/boot/efi` to `/etc/fstab`, in addition to the designated swap area, before we let the installer have its way.
cat >>/tmp/bsdinstall_etc/fstab <<EOF
/dev/da0p1 /boot/efi msdosfs rw,sync,noatime,-m=600,-M=700 2 2
/dev/da0p4 none swap sw 0 0
EOF
When adding users to the system, I chose my UID to match what Debian had given me (1000).
After the install, the system rebooted immediately into FreeBSD. Which was not bad but not what I expected.
A new machine part 3: The Reluctant GRUB
Messing with the partition table didn’t help. It was already booting off the correct partition, the EFI file system. The FreeBSD installer had noticed that `/boot/efi` was writable, so it dropped its own EFI boot loader into the key position of `EFI/boot/bootx64.efi`. How did I discover this? Mostly by comparing the sizes of the files within that partition:
find /boot/efi -type f -iname '*.efi' -ls | sort -k7 -n
To remind myself how to fix the situation, I referred to the previous experiment with Ubuntu and examined its `/boot/efi` file system, before settling on the following procedure:
cp /boot/efi/EFI/debian/shimx64.efi /boot/efi/EFI/boot/bootx64.efi
cp /boot/efi/EFI/debian/fbx64.efi /boot/efi/EFI/boot/
cp /boot/efi/EFI/debian/mmx64.efi /boot/efi/EFI/boot/
And after a reboot I was indeed presented with GRUB. So I booted back into FreeBSD and copied the FreeBSD-related files from the other machine to install them.
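A quick way to confirm which loader the firmware will actually run is to list the boot entries; both Debian and FreeBSD ship an `efibootmgr` for this (output details vary by firmware):

```shell
# Shows the boot order and the file path behind each EFI boot entry.
# EFI/boot/bootx64.efi is the fallback path the firmware falls back to
# when no explicit boot entry matches, which is why overwriting it
# changed the boot behavior.
sudo efibootmgr -v
```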
After a few reboots back and forth, I found that I had indeed reproduced the setup; the `zdata` pool imported properly every time, and the datasets within it mounted at the desired locations.
Putting the lessons to use
I don’t think I have anything on the hobby PC that strictly relies upon Ubuntu being Ubuntu. It does make certain applications easier to obtain, but all the applications I care about for the hobby are generally Linux-friendly, so changing out Ubuntu for Debian seems plausible. I might even get some more fine-grained control over how the resulting machine looks.
I have a 500 GB USB SSD lying around, not seeing a lot of use. Perhaps I could create a ZFS storage pool on it, back up the existing hobby PC to it, and use that as a starting point for a rebuild.
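A sketch of what that backup might look like, assuming the USB disk shows up as `/dev/sdc` and the hobby PC’s pool is also named `zdata` (names are assumptions, and I haven’t run these yet):

```shell
zpool create zbackup /dev/sdc     # new pool on the USB SSD
zfs snapshot -r zdata@migrate     # recursive point-in-time snapshot
# Replicate the whole pool, snapshots and properties included.
zfs send -R zdata@migrate | zfs receive -F zbackup/zdata
```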
Final products
I have stored the various artifacts that came out of this experiment in a repository.
Reward yourself with a festive beverage for reading this far!