
I've always been a dual-booting person: Linux and Windows back in the day when that was still necessary, and Linux and BSD now, because there are always things that one of them does better than the other.

The downside to dual-booting has always been the same: your disk space gets split up.

Different operating systems have different filesystems, so even sharing data can be complicated at times, and of course, every OS needs its own partition for the root filesystem.

But what if it wasn't like that?

Meet ZFS

ZFS is a great filesystem.
Its most prominent features are checksumming of your data (to prevent silent corruption) and block-level compression, as well as a built-in volume manager, enabling things like software RAID and pooling multiple devices into one ZFS pool, all on the filesystem level.
There is also the ability to create different sub-filesystems (called "datasets") with different properties, so you can use different settings for different kinds of data.
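
As a tiny illustration of that last point (using a hypothetical pool called tank, not the pool we will actually build later):

# a dataset whose settings differ from its parent's
zfs create -o compression=off -o atime=off tank/media
zfs get compression,atime tank/media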

Another great thing about ZFS is: It's everywhere.
All major operating systems have a port of it available, and all the relevant ones to me make it possible to run it as the root filesystem.

Now, I was thinking one day: What if you used all of those capabilities to the max, to get around that old problem with multi-booting OSes?

Cohabiting

Turns out, someone had already done that.

That blog article helped me a lot while setting up my first experiments on this matter, though I decided to do some things differently in my setup.

Most of the stuff in that article still applies today; however, I still decided to write my own article on this, to bring some of my own thoughts and experience into it, and to show how you would do it with more than just two operating systems.

I will try to structure this article a bit like the older one, so readers can follow both and compare what we're doing differently.

Basic concepts

In case you don't want to read the other article and/or don't know a lot about ZFS, here is a quick overview of what I'll do and how this works.

A ZFS pool (zpool) is basically a set of devices chained together into one logical device, in different ways depending on how you configure it. I won't go into detail on creating complicated zpool setups; I will use a single-disk pool, because this article is not meant to show off any of the RAID capabilities of ZFS.

A zpool can contain a hierarchy of sub-filesystems (called zfs datasets). Since they can be used as if they were completely separate filesystems, the idea of installing different operating systems on different datasets is not that far-fetched. And it works very nicely!

However, there are some caveats. This is not really something that's meant to be done, so some downsides are bound to come up.

Luckily, there's only one real "semi-problem": Mountpoints.

Mountpoints are a zfs feature where you can set the mountpoint for a dataset globally (on the filesystem level). The zfs mount service then mounts those datasets at startup, if it is enabled.
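
For example, sticking with the hypothetical tank/media dataset from above, that looks like this:

# store the mountpoint in the dataset itself; zfs will mount it there automatically
zfs set mountpoint=/srv/media tank/media
zfs get mountpoint tank/media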

Sadly, that's the thing you lose when having multiple OSes on one pool, at least for the OS-specific datasets (like the root filesystem and direct subfilesystems of that).

Instead of mountpoints, a classic /etc/fstab file will be used. To keep things simple in this article, we will only be using one or two datasets per OS, to show how it works without writing a 20-line fstab.

I might write a tool to automate some of these processes at some point, but that point is currently far in the undefinable future. Until then, you will have to create your datasets and fstab files yourself.

ZFS dataset hierarchy

I will be using a fairly standard dataset hierarchy here, to stay close to what zfs boot environment tooling expects, and maybe even be compatible with it (I haven't used boot environments yet, so I don't know).

The zpool used here will be called "rpool", and it will have the following datasets (the ones written in caps aren't mounted anywhere, so the mountpoint for them will be none):

  • rpool/ROOT: datasets under this one will contain operating systems.
    • rpool/ROOT/fbsd: contains the FreeBSD root filesystem
      • rpool/ROOT/fbsd/distfiles: dataset for port distfiles, to demonstrate dataset properties
    • rpool/ROOT/void: contains the Void Linux root filesystem
      • rpool/ROOT/void/distfiles: same as above (but for the XBPS package cache)
    • rpool/ROOT/hipster: contains the OpenIndiana (Hipster distribution) root filesystem
      • rpool/ROOT/hipster/var: contains OpenIndiana's /var
  • rpool/HOME: datasets under this will contain user data.
    • rpool/HOME/someone: home directory for a user.

Bootloaders

For the bootloader, I have found that while booting FreeBSD directly with GRUB works, it has some downsides (for example, some parameter for the kernel location isn't set, which somehow makes freebsd-update freak out and report an error for every single file on the whole filesystem).

Because of that, I decided to instead chainload the respective loaders for both non-Linux operating systems from GRUB. Specifically, the second-stage bootloader will be loaded; it is located at /boot/zfsloader on FreeBSD, and at just /boot/loader for OpenIndiana.

One thing to note: while ZFS on Linux and GRUB don't care about the zpool's bootfs property (at least after the initial GRUB install), the bootloader used by the other two systems here depends on it to know where its configuration files are. So when you want to boot FreeBSD, you have to run zpool set bootfs=rpool/ROOT/fbsd rpool first, and the same goes for OpenIndiana. Most people will seldom actively use more than two operating systems, though, so I can live with that (I'm mostly doing three to demonstrate that it's possible).
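
Since that switch will come up again and again, here is a minimal sketch of a hypothetical helper script (bootnext.sh is just a name I made up; it is not part of any of these systems):

#!/bin/sh
# bootnext.sh - point the chainloaded loader at another OS before rebooting
# usage: ./bootnext.sh fbsd|void|hipster
zpool set bootfs="rpool/ROOT/$1" rpool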

Setup steps

I will be starting the process and creating the zpool on Linux. Here is a rough overview of the involved steps:

  1. Create a Void ISO with ZFS, boot that and create the zpool and datasets
  2. Install and configure Void and GRUB and confirm everything works
  3. Boot FreeBSD and install it
  4. Boot into Void again to add a GRUB entry for FreeBSD
  5. Repeat step 3 and 4 for OpenIndiana

I will be performing these steps in a VM without UEFI to keep it simple; I will however point out how to do it on UEFI when I get to installing the bootloader.

Getting the install media

I got the install media for FreeBSD from here, and the one for OpenIndiana from here.

The versions used were Hipster 2018.10 (Minimal installer DVD iso) and FreeBSD 12.0-RELEASE (disc1) for amd64.

Void

Void is a little special here, as no Linux distribution's install media ships ZFS at the moment, for licensing reasons.

However, Void provides a great tool called void-mklive which can be used to create custom live ISOs really easily.
It depends on xbps, so you should use it on an already existing Void install.

The process for creating an ISO when starting from scratch is the following:

# clone the repo
git clone https://github.com/void-linux/void-mklive.git
cd void-mklive
# compile the scripts
make
# (optional) look at which options are available
./mklive.sh -h
# now, we can just create an ISO with zfs (perl is there because I had issues without it):
sudo ./mklive.sh -o void-zfs.iso -p "perl zfs" -T "Void ZFS"
# wait for it to finish and now you have a custom Void ISO!

I added some more options to the last command, such as my keyboard layout; you can look at the output of ./mklive.sh -h to see what's possible.

Let's begin! (Creating the zpool on Void Linux)

(Note: You should use the operating system you're most familiar with to create the pool. The steps will be roughly the same, but with different device names and commands, of course. And of course, inform yourself about features that might not be available on other platforms if you want this to work!)
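
Once the pool exists (we will create it below), a quick way to review which feature flags ended up enabled is:

# anything listed as enabled or active has to be supported by every OS
# that is supposed to import the pool read/write
zpool get all rpool | grep feature@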

After booting from the Void ISO and logging in as root, we first load the zfs kernel module:

modprobe zfs

The disk we will be using here is called vda by Linux.

If there is already a partition table, destroy it:

shred -s 128M /dev/vda
sync

Then, partition the disk. We will be creating a zpool using a partition instead of a whole disk, which has benefits and drawbacks, but it's simple.

I like to use cfdisk on Linux; you should, of course, use the tools you're most familiar with.

cfdisk /dev/vda

It will ask you for the kind of partition table you want to create; choose gpt.

Create a partition layout looking roughly like this:

vda1: 19,9G, Type "FreeBSD ZFS"; vda2: 123M, Type "BIOS boot"

The partition type of the first partition does not really matter, but setting it to a ZFS type can't hurt either. The BIOS boot partition could be smaller, but making it bigger doesn't hurt either.
For booting with UEFI, set the partition type of the second partition to EFI instead and make it a bit bigger, just to be sure (about 512M).
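
If you prefer a scriptable tool and have gptfdisk available on the live system, a roughly equivalent sgdisk invocation could look like this (a sketch; adjust the sizes to your disk - a504 is the FreeBSD ZFS type, ef02 is BIOS boot, ef00 is EFI System):

# BIOS variant: ZFS partition first, small BIOS boot partition at the end
sgdisk -n 1:0:-128M -t 1:a504 -n 2:0:0 -t 2:ef02 /dev/vda
# EFI variant: leave more room at the end and use the EFI System type instead
# sgdisk -n 1:0:-512M -t 1:a504 -n 2:0:0 -t 2:ef00 /dev/vda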

We are now ready to create the zpool!

zpool create -o ashift=9 -o cachefile= -O compression=lz4 \
> -o feature@userobj_accounting=disabled \
> -m none -R /mnt rpool /dev/vda1

Quick breakdown of the options used:

  • -o ashift=9: Set the pool's assumed sector size to 2^9 = 512 bytes. See here
  • -o cachefile=: Turn off the cachefile for now.
  • -O compression=lz4: Set compression for the top-level dataset. Will be inherited.
  • -o feature@userobj_accounting=disabled: Turn off a Linux-only feature. No other OS could mount the pool read/write with this enabled.
  • -m none: Set the mountpoint for the pool to none.
  • -R /mnt: Temporarily prefix all of the pool's mountpoints with /mnt (the altroot).
  • rpool: The name of the pool.
  • Everything after that is the list of vdevs (virtual devices that blocks are striped across; you can use mirrors, RAIDZ volumes and single devices as vdevs). We are only using one device here, /dev/vda1.

Note that for an actual production system, I would not rely on /dev/sdX, /dev/vdX or the like; I would go by /dev/disk/by-id/<id>. That way, the pool will still work if the order of the attached disks changes. If you do it that way, however, you have to ln -s /dev/disk/by-id/<id> /dev/, because GRUB is stupid and will not find your boot device otherwise.

If you intend to use EFI, now would be a good moment to create the filesystem on the EFI partition:

mkfs.vfat /dev/vda2

Running zpool list and zfs list now should show something like this:

[root@void-live ~]# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19,8G   285K  19,7G         -     0%     0%  1.00x  ONLINE  /mnt
[root@void-live ~]# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  94,5K  19,1G    24K  none

We now create the datasets:

zfs create rpool/ROOT
zfs create -o mountpoint=legacy rpool/ROOT/fbsd
zfs create -o mountpoint=legacy rpool/ROOT/void
# The mountpoint=legacy will be inherited
zfs create -o compression=off rpool/ROOT/fbsd/distfiles
zfs create -o compression=off rpool/ROOT/void/distfiles
# We don't create rpool/ROOT/hipster, that will be done by the OI installer
zfs create rpool/HOME
# We use mountpoint here because this dataset should always be mounted
zfs create -o mountpoint=/home/someone rpool/HOME/someone

And then check the result:

[root@void-live ~]# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                       357K  19,1G    24K  none
rpool/HOME                   48K  19,1G    24K  none
rpool/HOME/someone           24K  19,1G    24K  /mnt/home/someone
rpool/ROOT                  144K  19,1G    24K  none
rpool/ROOT/fbsd              48K  19,1G    24K  legacy
rpool/ROOT/fbsd/distfiles    24K  19,1G    24K  legacy
rpool/ROOT/void              48K  19,1G    24K  legacy
rpool/ROOT/void/distfiles    24K  19,1G    24K  legacy

Looking good so far!

I will now proceed to install Void.

Installing Void Linux

(Note: This section is very long, as installing Void is a bit more complicated than the other systems; mostly because Void doesn't have a usable installer that's fully compatible with ZFS. I might provide a guide for starting on FreeBSD at some point, as installing GRUB isn't much harder there.)

The install I will perform here is a fairly normal Void bootstrap install; however, I did some things in a fairly custom way, such as not using the normal base-system package. The reason is that I want a specific LTS kernel instead of the default (which is always the newest kernel available).
I will first show the normal process, and then how I did it instead.

Step 1 - Initial bootstrap

Because of the legacy mountpoints, we need to mount the root dataset manually:

# unmount the home dataset for now
zfs umount -a
# mount the used filesystems
mount -t zfs rpool/ROOT/void /mnt
mkdir -p /mnt/var/cache/xbps
mount -t zfs rpool/ROOT/void/distfiles /mnt/var/cache/xbps
# and remount the home dataset
zfs mount rpool/HOME/someone

Then, we bootstrap the Void base-system:

xbps-install -S -R https://alpha.de.repo.voidlinux.org/current -r /mnt base-system grub perl

Quick breakdown of the options used:

  • -S: Sync the repo first.
  • -R <url>: Set the repo used to <url>.
  • -r /mnt: Set the root path to /mnt.
  • base-system grub perl: The packages you want to install.

We'll install zfs in the next step, because in my experience, it works better after installing perl.

These are the packages that I used in my bootstrap command, to use Linux 4.14 instead of the newest kernel:

base-voidstrap grub perl acpid ethtool libgcc usbutils wifi-firmware wpa_supplicant linux4.14 linux4.14-headers linux-firmware-network linux-firmware-intel dracut cpio

Explanation: base-voidstrap is the package normally used for small Void bootstraps, e.g. as a base for containers, so it doesn't pull in the linux package, which would install the default kernel.

The rest of the packages (acpid to wpa_supplicant) are the packages that base-system depends on in addition to the base-voidstrap packages, and everything after that is kernel-related stuff for Linux 4.14.

After running the xbps-install command, you have to accept the repository GPG key and wait for the install process to finish. It shouldn't take too long.

Step 2 - chroot

Now mount the necessary filesystems for the chroot:

mount -B /dev /mnt/dev
mount -B /dev/pts /mnt/dev/pts
mount -t sysfs sys /mnt/sys
mount -t proc proc /mnt/proc

And copy /etc/resolv.conf to enable name resolution:

cp /etc/resolv.conf /mnt/etc/

If you're using EFI, mount that partition as well:

mkdir /mnt/boot/efi
mount /dev/vda2 /mnt/boot/efi

And chroot into the new system:

chroot /mnt bash -l

Some basic configuration steps can now be performed; for example, setting a hostname in /etc/hostname, changing some settings in /etc/rc.conf or configuring your glibc locales (given that you're using glibc). It might also be a good idea to set a root password now.
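
For reference, a minimal version of those steps could look like this (the hostname and the locale are just examples, pick your own):

echo voidzfs > /etc/hostname
# glibc only: uncomment your locale(s) in /etc/default/libc-locales first, then
xbps-reconfigure -f glibc-locales
passwd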

Now is also the right time to install zfs:

xbps-install -S zfs

This will also install headers for the newest kernel, but not the kernel itself, so that's fine. The install process might take some time because the kernel module is being compiled.

Give the Void install a hostid; this is necessary to be able to import the pool at boot without forcing it. You can write whatever you want to this file; most people use the MAC address of their first network interface. I will keep it simple here, but you should probably choose a somewhat unique number.

echo 12345 > /etc/hostid

Set the zpool's cachefile property:

zpool set cachefile=/etc/zfs/zpool.cache rpool

Edit /etc/fstab and add the following lines to it:

rpool/ROOT/void           /               zfs defaults    0    0
rpool/ROOT/void/distfiles /var/cache/xbps zfs defaults    0    0

And after confirming that our /home/someone dataset is actually mounted, create that user:

useradd -G wheel,network,video,input,lp someone
cp /etc/skel/.* /home/someone/
chown -R someone:someone /home/someone
passwd someone
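
To double-check that those files actually ended up on the ZFS dataset (and not on an empty directory because the dataset wasn't mounted), a quick look can't hurt:

findmnt /home/someone
# should list rpool/HOME/someone as the SOURCE
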
Installing GRUB

First of all, configure GRUB with reasonable defaults for ZFS on Linux (in /etc/default/grub). You can leave it mostly as-is; however, it is recommended to add elevator=noop (to reduce the overhead, as ZFS does its own optimizations) and noresume (because you can't hibernate on ZFS) to GRUB_CMDLINE_LINUX_DEFAULT.
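
The relevant line would then look something like this (loglevel=4 just stands in for whatever parameters were already in your /etc/default/grub; keep those and append the two new ones):

GRUB_CMDLINE_LINUX_DEFAULT="loglevel=4 elevator=noop noresume"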

Then, run:

grub-probe /

If it outputs zfs, you're good to go; otherwise, you probably used a device id when creating your zpool.
In that case, you have to link the device id to /dev because GRUB is dumb:

ln -s /dev/disk/by-id/<id> /dev/

After that, grub-probe should give the correct output.

You will also need to set the bootfs property of your zpool to the dataset you're installing the GRUB config to:

zpool set bootfs=rpool/ROOT/void rpool

The final install command differs depending on whether you want to use EFI or not.

For BIOS:

grub-install --target=i386-pc /dev/<device>

For UEFI:

grub-install --target=x86_64-efi --efi-directory=/boot/efi/ --bootloader-id=GRUB

More information can be found here.

Completing the setup

Finally, force-reconfigure your kernel; this will add the zfs module to your initramfs, and it will also generate the initial GRUB config file.

xbps-reconfigure -f linux4.14

We should be ready to boot the system now!

exit out of the chroot and unmount the filesystems:

exit
umount -R /mnt
zfs umount -a

Export the pool:

zpool export rpool

And reboot!

reboot

(I actually don't run reboot here, but poweroff instead because I need to remove the virtual install media before continuing.)

After booting the freshly installed system, it should look something like this:

A fully booted Void Linux

Log in as root and enable network services:

ln -s /etc/sv/{dhcpcd,sshd} /var/service/

You might need to mount the zfs filesystems manually after the first boot; I think this is because Void does not actually use the common service structure for zfs, but instead relies on zpool.cache for storing mounts.

zfs mount -a

Confirm that GRUB is working and updatable:

update-grub

If that doesn't work, recreate your device symlinks; that only has to be done once and should not be necessary in the future. (See Installing GRUB.)

Void is now installed! Let's go on to FreeBSD.

Installing FreeBSD

Installing FreeBSD (and also OpenIndiana) should be a much simpler process than installing Void; that is mostly because they come with ZFS in the kernel, and also because they come with usable installers (that work with ZFS, of course).

The actual install

After booting the install disk, choose "Install" and adjust the options to your liking on the following screens; I only chose lib32 from the components to install, again to keep the setup simple.

When you get to the partitioning section, choose Shell, as we will mount the filesystems ourselves.

Getting straight to it, list the available pools:

zpool import

You will see that you have to force-import the pool because it was previously in use by our other OS; I have not yet found a way to have the same hostid on FreeBSD and Linux, sadly. The good news is that FreeBSD doesn't care about the pool having been imported by Linux after this first time; on the Linux side, however, you have to force-import it whenever a different OS was booted before.

Force import with /mnt as the mountpoint and mount datasets:

zpool import -fR /mnt rpool
mount -t zfs rpool/ROOT/fbsd /mnt
zfs mount -a

We don't need to mount the distfiles dataset yet, as we're not installing the ports collection via the FreeBSD installer.

This is it - exit and let the installer run!

After the base system has been extracted, follow the rest of the installer and configure everything to your liking.

Choose to add a user; I used the following settings:

Username - someone; Password - (hidden); Full Name - Someone; Uid - 1000; groups - someone wheel; Home - /home/someone; Shell - /bin/tcsh; Locked - no

I manually set the Uid to 1000, as most Linux distributions configure the first user with that id. You can, of course, use whatever you want - the user should just have the same id on all platforms.

Final touches

After you exit the installer, you'll be asked if you would like to open a shell in the new system to make any final manual modifications; choose Yes. A chroot shell on the new system will be started.

The /etc/fstab file for FreeBSD is gonna be really simple:

rpool/ROOT/fbsd            /                     zfs  rw  0  0
rpool/ROOT/fbsd/distfiles  /usr/ports/distfiles  zfs  rw  0  0

You need to create the distfiles directory:

mkdir -p /usr/ports/distfiles

And enable the zfs service to mount our home directory:

sysrc -f /etc/rc.conf zfs_enable="YES"

exit out of the chroot and choose to use the Live CD at the next prompt.

Log in as root.

If you want to boot using UEFI, copy the FreeBSD loader EFI executable to the EFI system partition:

mkdir /mnt/boot/efi
mount_msdosfs /dev/vtbd0p2 /mnt/boot/efi
mkdir /mnt/boot/efi/EFI/freebsd
cp /mnt/boot/loader.efi /mnt/boot/efi/EFI/freebsd/
umount /mnt/boot/efi

Unmount the datasets and export the pool:

zfs umount -a
umount /mnt
zpool export rpool

This is so we don't have to force-import the pool next time.

Time to reboot! (Or poweroff.)

GRUB entry

Boot into Void again and edit the file /etc/grub.d/40_custom.

GRUB menuentries look like this:

menuentry 'Name' {
	# commands
}

Here is my entry for chainloading the second-stage FreeBSD loader for ZFS:

menuentry 'FreeBSD Loader' {
	insmod zfs # ZFS support for GRUB
	set root=(hd0,gpt1) # a partition in our zpool
	# Chainloading the zfs-capable bootloader
	kfreebsd /ROOT/fbsd@/boot/zfsloader
}

And this one should work for EFI:

menuentry 'FreeBSD Loader' {
	insmod fat
	insmod chain
	set root=(hd0,gpt2) # your EFI system partition
	chainloader /efi/freebsd/loader.efi
}

After adding one of those entries to /etc/grub.d/40_custom, regenerate your grub.cfg:

update-grub

And set the bootfs property on your zpool to the FreeBSD dataset - without that, the FreeBSD loader won't be able to find its config files:

zpool set bootfs=rpool/ROOT/fbsd rpool

You should now be ready to reboot into FreeBSD!

(You might need to log in as root and zfs mount -a after first boot.)

(Screenshots: the FreeBSD entry in GRUB, the FreeBSD loader, a root login on FreeBSD, and FreeBSD logged in as someone.)

Installing OpenIndiana

The install process of OpenIndiana is just as straightforward as the one for FreeBSD.

The install

After booting the install media and choosing a keyboard layout and a language, I pressed 3 and then enter to open a shell. This is probably only necessary because, again, the pool was mounted on a different host before.

Run:

zpool import -f rpool

I have found that the imported pool must not have a temporary mount point (-R) when installing OpenIndiana. I don't know why.

exit out of the shell and press enter to launch the installer.

Press F5 to install the OS to an existing zpool.

Our pool should be selected by default; for the BE name, I chose "hipster" here. I unchecked the option to overwrite the pool's boot configuration by pressing space on it, since I manage the boot configuration myself.

I then pressed F2 to continue and adjusted the installer's settings to my liking.

After the install finished successfully, press F9 to add some necessary things.

Final touches

Open a shell again by pressing 3 and enter.

zfs list will show you two newly created datasets: rpool/ROOT/hipster and rpool/ROOT/hipster/var.

Because of our other operating systems, mountpoints like / and /var are not feasible; we will need to change them:

zfs set mountpoint=legacy rpool/ROOT/hipster

That should be enough (as mountpoints are inherited).
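
If you want to verify that, the SOURCE column for the var dataset should say it is inherited rather than local:

zfs get -r mountpoint rpool/ROOT/hipster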

Mount the new root dataset:

mkdir /mnt/inst
mount -t zfs rpool/ROOT/hipster /mnt/inst

Again, we need to edit an fstab file; on OpenIndiana, it's at /etc/vfstab.
I will not change any existing entries, but just add the following ones:

rpool/ROOT/hipster     -    /    zfs    -    yes    -
rpool/ROOT/hipster/var -    /var zfs    -    yes    -

I have not used OpenIndiana on EFI yet, so I don't know if the following commands will work (especially the mount command - I have no idea what the filesystem types are called on Solaris). But you should mount the EFI partition somewhere, and then copy /mnt/inst/boot/bootx64.efi to a folder on your EFI partition, e.g. /EFI/OI.

This should be enough!

Unmount the dataset, export the pool and reboot:

umount /mnt/inst
zpool export rpool
reboot

GRUB entry

The same deal as with FreeBSD.

Boot into Void again and edit /etc/grub.d/40_custom.

The nice thing is that OpenIndiana uses the same loader as FreeBSD these days; that means we can almost fully copy our FreeBSD entry:

menuentry 'OI Loader' {
	insmod zfs # ZFS support for GRUB
	set root=(hd0,gpt1) # a partition in our zpool
	# Chainloading the bootloader
	kfreebsd /ROOT/hipster@/boot/loader
}

The only big difference here is the loader's name: OI does not come with a separate non-ZFS loader, so its ZFS-capable loader is simply called loader instead of zfsloader.

The entry for chainloading the EFI loader is also the same, except for the last command, which will be chainloader /efi/path/to/bootx64.efi.
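
Spelled out (and assuming you copied the loader to /EFI/OI on the EFI system partition, as suggested above), that entry could look like this:

menuentry 'OI Loader' {
	insmod fat
	insmod chain
	set root=(hd0,gpt2) # your EFI system partition
	chainloader /EFI/OI/bootx64.efi
}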

Update GRUB, set the pool's bootfs and reboot:

update-grub
zpool set bootfs=rpool/ROOT/hipster rpool
reboot

You can now choose the OpenIndiana Loader in GRUB:

(Screenshots: the OI Loader entry in GRUB, and the OI loader itself.)

During the first boot, the system will take some time to load smf(5) service descriptions; subsequent boots will not need that and be much faster.

root shell on OpenIndiana

(I don't know a lot about Solaris, so I can't say a lot more about this OS; I chose it as a third OS for this guide to demonstrate that three completely different operating systems can run from the same zpool.)

Conclusion

So, this is it; we have successfully installed and booted all three operating systems on a single zpool!

The approach is mostly workable, except for some small details (like having to set the bootfs); and it's definitely an improvement over the classic one of having different root partitions for each operating system.

[root@horst ~]# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      1.97G  17.2G    24K  none
rpool/HOME                 57.5K  17.2G    24K  none
rpool/HOME/someone         33.5K  17.2G  33.5K  /home/someone
rpool/ROOT                 1.97G  17.2G    24K  none
rpool/ROOT/fbsd             560M  17.2G   560M  legacy
rpool/ROOT/fbsd/distfiles    24K  17.2G    24K  legacy
rpool/ROOT/hipster          664M  17.2G   636M  legacy
rpool/ROOT/hipster/var     26.5M  17.2G  25.8M  legacy
rpool/ROOT/void             794M  17.2G   559M  legacy
rpool/ROOT/void/distfiles   235M  17.2G   235M  legacy

Three different operating systems, all sharing the same available space and, of course, all the regular benefits that come with zfs!

I'm very much looking forward to putting this approach into production once I get my new machine; I will report on how well it works over a longer period of time then.

One last small tip: Void does not want to boot if the pool was last imported by a different OS (this only affects Linux, and I don't know why).
To get around that, run zpool import -f rpool followed by exit (two times).
I don't know why you have to do it twice, but it works.

If you have any suggestions or ideas for improvements, noticed a typo or a grammar issue, please send me an e-mail so I can fix it.

Farewell for now!