Linux ZFS Notes
2. Installation
2.1. Rocky
- Import the ZFS repository for Rocky Linux:
  sudo dnf -y install https://zfsonlinux.org/epel/zfs-release-3-0$(rpm --eval "%{dist}").noarch.rpm
- Confirm the ZFS package exists in the repo:
  dnf repolist | grep zfs
  Sample output:
  zfs                  OpenZFS for EL10 - dkms
- Install the ZFS package:
  sudo dnf install -y zfs
  This will take some time…
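- Optionally, verify the install (an extra check, not part of the original steps; exact package names vary by release):
  # List the installed ZFS-related packages
  rpm -qa | grep -i zfs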
2.2. Fedora
- Remove zfs-fuse:
  sudo rpm -e --nodeps zfs-fuse
- Add the ZFS repo:
  sudo dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-8$(rpm --eval "%{dist}").noarch.rpm
- Install the kernel headers:
  sudo dnf install -y kernel-devel-$(uname -r | awk -F'-' '{print $1}')
- Install the ZFS packages:
  sudo dnf install -y zfs
  This will take some time…
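- Optionally, confirm the module built against the running kernel (an extra check, not in the original steps; assumes the dkms-based zfs packages were installed):
  # Show dkms build status; expect a line roughly like "zfs/<version>, <kernel>, x86_64: installed"
  dkms status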
2.3. Common Config
- Load the kernel module:
  sudo modprobe zfs
- By default, the ZFS kernel modules are loaded only upon detecting a pool. To always load the modules at boot (required for ZFS to start up automatically on reboots):
  echo zfs | sudo tee /etc/modules-load.d/zfs.conf
- By default, ZFS may be removed by kernel package updates. To protect the zfs package from removal and prevent this:
  echo 'zfs' | sudo tee /etc/dnf/protected.d/zfs.conf
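- A quick verification sketch (not part of the original steps) to confirm the module and boot-time config took effect:
  # Confirm the zfs module is currently loaded
  lsmod | grep zfs
  # Confirm the module will be loaded at boot
  cat /etc/modules-load.d/zfs.conf
  # Report the userland tools and kernel module versions
  zfs version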
4. Manage via cmd line
4.1. Create a Storage Pool
4.1.1. Get list of Disks
- Method 1:
  sudo fdisk -l | grep Disk | grep sectors | grep -v loop
  Example:
  Disk /dev/sda: 153.39 GiB, 164696555520 bytes, 321672960 sectors
  Disk /dev/sdb: 698.64 GiB, 750156374016 bytes, 1465149168 sectors
  Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
  Disk /dev/nvme0n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
  Disk /dev/sdd: 58.44 GiB, 62746787840 bytes, 122552320 sectors
  Disk /dev/zram0: 8 GiB, 8589934592 bytes, 2097152 sectors
- Method 2:
  lsblk -o name,size,fstype,type,mountpoint | grep -v loop
  Example:
  NAME           SIZE FSTYPE TYPE MOUNTPOINT
  sda          153.4G        disk
  ├─sda1       153.4G        part
  └─sda9           8M        part
  sdb          698.6G        disk
  ├─sdb1       698.6G        part
  └─sdb9           8M        part
  sdc          931.5G        disk
  ├─sdc1       931.5G        part
  └─sdc9           8M        part
  sdd           58.4G        disk
  └─sdd1        58.4G vfat   part
  zram0            8G        disk [SWAP]
  nvme0n1      232.9G        disk
  ├─nvme0n1p1    600M vfat   part /boot/efi
  ├─nvme0n1p2      1G ext4   part /boot
  └─nvme0n1p3  231.3G btrfs  part /var/lib/containers/storage/overlay
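- Method 3 (an optional addition): list disks by their stable /dev/disk/by-id names. Using these instead of sda/sdb in zpool create keeps pools importable even if device letters change between boots; the actual by-id names will differ per system:
  # Show persistent disk identifiers, hiding the per-partition symlinks
  ls -l /dev/disk/by-id/ | grep -v part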
4.1.2. Create a Storage Pool
- Create one of the following Storage Pool types:
  - Single: Use the single 1TB disk to create a 1TB Storage Pool
    sudo zpool create SingleDisk /dev/sdc
  - Striped: Combine the 165GB sda & 750GB sdb disks into a 915GB RAID 0 Storage Pool
    sudo zpool create StrippedPool /dev/sda /dev/sdb
  - Mirrored: Combine the sda & sdb disks into a RAID 1 Storage Pool
    sudo zpool create MirroredPool mirror /dev/sda /dev/sdb
  - RAID5: Combine the sda, sdb & sdc disks into a RAID-Z Storage Pool
    sudo zpool create RAIDZPool raidz /dev/sda /dev/sdb /dev/sdc
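- A variation on the RAID-Z example, assuming the disks use 4K physical sectors (the ashift=12 value is my assumption, not from the original notes):
  # Force 4K alignment at creation time; ashift cannot be changed on a vdev later
  sudo zpool create -o ashift=12 RAIDZPool raidz /dev/sda /dev/sdb /dev/sdc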
4.2. Pool Status
- Get the Pool statuses:
  zpool status | grep ONLINE
  Example output:
   state: ONLINE
          SingleDisk    ONLINE       0     0     0   (1)
            sdc         ONLINE       0     0     0
   state: ONLINE
          StrippedPool  ONLINE       0     0     0   (2)
            sda         ONLINE       0     0     0
            sdb         ONLINE       0     0     0
  (1) Single Disk Storage Pool
  (2) Striped Storage Pool
- List the Storage Pools:
  zpool list
  Example output:
  NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
  SingleDisk     928G   372K   928G        -         -     0%     0%  1.00x    ONLINE  -
  StrippedPool   849G   211K   849G        -         -     0%     0%  1.00x    ONLINE  -
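- For live per-pool and per-device I/O statistics (an additional check, not in the original notes):
  # Print pool I/O statistics every 5 seconds; Ctrl-C to stop
  zpool iostat -v 5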
4.4. Import Storage Pools
- Find Pools that can be imported:
  sudo zpool import
  Sample output:
     pool: MirroredDisk
       id: 5839134755486227288
    state: ONLINE
   action: The pool can be imported using its name or numeric identifier.   (1)
   config:
           MirroredDisk  ONLINE
             mirror-0    ONLINE
               sdb       ONLINE
               sdc       ONLINE
  (1) This pool can be imported
- Import the Pool:
  sudo zpool import -f MirroredDisk
- List the imported Pool:
  sudo zpool list
  Sample output:
  NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
  MirroredDisk  29.5G  11.3G  18.2G        -         -     3%    38%  1.00x    ONLINE  -
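- If a pool does not appear because device names changed (e.g. an external enclosure re-enumerated), point the scan at a specific device directory. A hedged sketch; the directory and pool name simply mirror the example above:
  # Scan /dev/disk/by-id instead of the default device nodes
  sudo zpool import -d /dev/disk/by-id
  # Then import using the name it reports
  sudo zpool import -d /dev/disk/by-id -f MirroredDisk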
4.4.1. Troubled Pools
- For an issue like the following:
  sudo zpool import -f USB-16TB
  cannot import 'USB-16TB': I/O error
          Recovery is possible, but will result in some data loss.
          Returning the pool to its state as of Sun 11 Aug 2024 10:05:01 AM EDT
          should correct the problem.  Approximately 10 seconds of data
          must be discarded, irreversibly.  Recovery can be attempted by
          executing 'zpool import -F USB-16TB'.  A scrub of the pool
          is strongly recommended after recovery.
- Force the import:
  sudo zpool import -fF USB-16TB
- Check scrub status:
  sudo zpool status USB-16TB
  Scrub progress:
    pool: USB-16TB
   state: ONLINE
    scan: scrub in progress since Sun Aug 11 00:24:25 2024
          5.79T scanned at 0B/s, 4.09T issued at 220M/s, 5.79T total
          0B repaired, 70.59% done, 02:15:08 to go   (1)
  config:
          NAME                                            STATE  READ WRITE CKSUM
          USB-16TB                                        ONLINE    0     0     0
            usb-ATA_ST16000NT001-3LV_0123456789ABCDE-0:0  ONLINE    0     0     0
  errors: No known data errors   (2)
  (1) Scrub progress with 2 hours 15 minutes to go
  (2) No discovered errors yet
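- To start the recommended scrub manually and watch it (the pool name reuses the example above):
  # Kick off a scrub of the recovered pool
  sudo zpool scrub USB-16TB
  # Re-check progress at any time
  sudo zpool status USB-16TB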
4.5. Datasets
An existing ZFS pool can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data…
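A short sketch of those per-dataset properties, using the RAIDZPool/Folder1 dataset created in the next section as the target (the property values are illustrative, not from the original notes):
  # Enable lz4 compression on this dataset only
  sudo zfs set compression=lz4 RAIDZPool/Folder1
  # Cap the dataset and its descendants at 100G
  sudo zfs set quota=100G RAIDZPool/Folder1
  # Confirm the settings
  zfs get compression,quota RAIDZPool/Folder1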
4.6. Create a Dataset
- Create the DataSet within an existing Pool:
  sudo zfs create RAIDZPool/Folder1
- Show the created DataSet and its default mountpoint:
  zfs get all | grep mountpoint
  Output:
  RAIDZPool          mountpoint  /RAIDZPool          default
  RAIDZPool/Folder1  mountpoint  /RAIDZPool/Folder1  default
- Change the DataSet’s mountpoint:
  sudo zfs set mountpoint=/mnt/RAIDZPool_Folder1 RAIDZPool/Folder1
- Show the DataSet’s new mountpoint:
  zfs get all | grep mountpoint
  Output:
  RAIDZPool          mountpoint  /RAIDZPool              default
  RAIDZPool/Folder1  mountpoint  /mnt/RAIDZPool_Folder1  local   (1)
  (1) New mountpoint
- Permission the MountPoint:
  sudo chown -R mattosd:mattosd /mnt/RAIDZPool_Folder1
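- To review or clean up datasets afterwards (a brief follow-up sketch; note that zfs destroy permanently deletes the dataset and its data):
  # List all datasets with their space usage and mountpoints
  zfs list
  # Remove the example dataset when it is no longer needed
  sudo zfs destroy RAIDZPool/Folder1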