

Fortunately, in case of issues I have 12 weeks of snapshots on my backup. To be absolutely sure, tonight I will run "zpool scrub" again for all 3 of my data pools, and that takes hours. Today I ran my weekly backup again, so all changes to all 3 data pools were sent to my 2 backups, and no errors were reported. I have detected no issues with ZFS, neither in 21.04 nor in 21.10; only once did I have a scrub error on my laptop, and it was corrected automatically by OpenZFS 2.0.2 (Ubuntu 21.04). In the comments you often see people trying out different versions of zfs-dkms; some work with a given kernel and others do not. I have the impression that combinations of kernel releases and ZFS releases were used that were never used and tested together in Ubuntu. Of course all those combinations should work, but sticking to the official Ubuntu releases is far more secure.
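Roughly, that routine looks like the sketch below; the pool and dataset names are only placeholders, so adjust them to your own layout:

```sh
# Start a scrub on each pool; scrubs run in the background and can take hours.
for pool in tank1 tank2 tank3; do     # hypothetical pool names
    zpool scrub "$pool"
done

# Later, check progress and whether any errors were found or repaired.
zpool status -v

# Weekly incremental backup: send only the changes since last week's snapshot
# to a backup pool (dataset and snapshot names are again placeholders).
zfs snapshot tank1/data@weekly-42
zfs send -i tank1/data@weekly-41 tank1/data@weekly-42 | zfs receive backup/data
```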
#Debian openzfs 2.0 manual
Reading those bug reports' comments, I often see two things: encryption and manual updates of ZFS from GitHub. I updated from 21.04 to 21.10 this week, but reading the nine-month-long list of comments on the bug report, Ubuntu 21.04 had the same type of problem. The first bug report, from January, is about "zfs-dkms 0.8.4-1ubuntu16", which means Ubuntu 20.04 or 20.10, from the time of ZFS on Linux (ZoL) and thus before OpenZFS. So the error seems to be present in all releases, not only in 21.10.
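If you want to check which kernel/ZFS combination you are actually running before comparing it with those reports, the standard commands are enough (the output of course differs per machine):

```sh
uname -r                       # running kernel release
zfs version                    # OpenZFS userland and kernel-module versions
dpkg -l | grep -E 'zfs|zsys'   # which Ubuntu ZFS packages are installed
dkms status                    # only relevant if zfs-dkms is used instead of the prebuilt module
```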
#Debian openzfs 2.0 upgrade
I did already upgrade on the 13th of October (not a Friday, fortunately). Now we can boot without dealing with GRUB. DON'T PANIC, but to be absolutely sure, postpone the upgrade to 21.10 by a few weeks.

Since doing d-i's job by hand was covered previously in Installing Debian's x32 port in 2020, and it's a similarly good experience (plus building the modules in a chroot is a good time at any rate), the bootloader will be fixed, ZFS installed normally, and the rootfs dumped/restored thereonto after normally booting into the target system. This means that all you need is an EFI-compatible multi-disk platform and some way to EFI boot it into d-i. The test setup is QEMU -bios OVMF.fd and two 8G drives, one of which is designated as primary. Filesystem tuning is not covered, encryption is supported, and SecureBoot is not covered because I haven't figured it out yet. Most-all gotchas are hopefully explained; there's prior art, but it was of little help.

62dd03a4928c412180b3024ac6c03a90 is this machine's ID. The initial run takes a long time, hence the -v. If you don't get the "Installing" and "Creating" lines on a systemd pre-v250 system, … I'd recommend rebooting now to verify that this works, which should look like this: … If not, and sd-boot shows errors or doesn't start at all, boot into the EFI shell: fs0:, and \linux initrd=\initrd.img- root=/dev/sda2 (the shell should support tab-completion; you might need to add a space before completing the initrd) (the root= option assumes you installed to the second partition of the first SCSI drive, as I did; adjust to taste). Or whatever else is listed here, so I can issue a correction; thanks in advance, &c.
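Spelled out, that EFI-shell fallback is just the following, typed at the shell prompt. The kernel version after initrd.img- is left incomplete above and is meant to be tab-completed, and root=/dev/sda2 matches the second-partition-of-the-first-SCSI-drive layout, so adjust both to your system:

```
Shell> fs0:
FS0:\> \linux initrd=\initrd.img- root=/dev/sda2
```

For reference, the machine ID mentioned above is the value stored in /etc/machine-id on the installed system.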
