Linux ZFS Notes

1. Reference Docs

2. Installation

2.1. Rocky

  1. Import the ZFS repository for Rocky Linux

    sudo dnf -y install https://zfsonlinux.org/epel/zfs-release-3-0$(rpm --eval "%{dist}").noarch.rpm
  2. Confirm the ZFS repo is enabled

    Run
    dnf repolist | grep zfs
    Sample output
    zfs                  OpenZFS for EL10 - dkms
  3. Install the ZFS package

    sudo dnf install -y zfs

    This will take some time…
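  4. Verify the install (optional; assumes the default dkms packages from the repo shown above)

    rpm -q zfs
    dkms status | grep zfs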

2.2. Fedora

  1. Remove zfs-fuse

    sudo rpm -e --nodeps zfs-fuse
  2. Add ZFS repo

    sudo dnf install -y https://zfsonlinux.org/fedora/zfs-release-2-8$(rpm --eval "%{dist}").noarch.rpm
  3. Install kernel headers

    sudo dnf install -y kernel-devel-$(uname -r | awk -F'-' '{print $1}')
  4. Install ZFS packages

    sudo dnf install -y zfs

    This will take some time…
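  5. Verify the module built against the running kernel (optional; assumes the dkms packages)

    dkms status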

2.3. Common Config

  1. Load the kernel module

    sudo modprobe zfs
  2. By default, the ZFS kernel modules are loaded on demand when a pool is detected. To always load them at boot

    This step is required for ZFS to start automatically after a reboot.

    echo zfs | sudo tee /etc/modules-load.d/zfs.conf
  3. By default, the zfs package can be removed by dnf during dependency resolution (for example, a kernel package update). To mark the package as protected so dnf refuses to remove it:

    echo 'zfs' | sudo tee /etc/dnf/protected.d/zfs.conf
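  4. Verify the module is loaded and the config files are in place (optional sanity check)

    lsmod | grep zfs
    cat /etc/modules-load.d/zfs.conf /etc/dnf/protected.d/zfs.conf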

3. Manage with GUI

3.1. Via Cockpit

  1. Cockpit and a ZFS plugin for it will be required; see the install sketch after this list.

  2. Use Cockpit or the following cmdline instructions:
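A minimal install sketch, assuming Cockpit from the distro repos and 45Drives' cockpit-zfs-manager plugin (the plugin repo URL and copy path are assumptions; check the plugin's README for current instructions):

    sudo dnf install -y cockpit
    sudo systemctl enable --now cockpit.socket
    git clone https://github.com/45drives/cockpit-zfs-manager.git
    sudo cp -r cockpit-zfs-manager/zfs /usr/share/cockpit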

4. Manage via cmd line

4.1. Create a Storage Pool

4.1.1. Get list of Disks

  1. Method 1

    sudo fdisk -l | grep Disk | grep sectors | grep -v loop
    Example
    Disk /dev/sda: 153.39 GiB, 164696555520 bytes, 321672960 sectors
    Disk /dev/sdb: 698.64 GiB, 750156374016 bytes, 1465149168 sectors
    Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk /dev/nvme0n1: 232.89 GiB, 250059350016 bytes, 488397168 sectors
    Disk /dev/sdd: 58.44 GiB, 62746787840 bytes, 122552320 sectors
    Disk /dev/zram0: 8 GiB, 8589934592 bytes, 2097152 sectors
  2. Method 2

    lsblk -o name,size,fstype,type,mountpoint | grep -v loop
    Example
    NAME          SIZE FSTYPE TYPE MOUNTPOINT
    sda         153.4G        disk
    ├─sda1      153.4G        part
    └─sda9          8M        part
    sdb         698.6G        disk
    ├─sdb1      698.6G        part
    └─sdb9          8M        part
    sdc         931.5G        disk
    ├─sdc1      931.5G        part
    └─sdc9          8M        part
    sdd          58.4G        disk
    └─sdd1       58.4G vfat   part
    zram0           8G        disk [SWAP]
    nvme0n1     232.9G        disk
    ├─nvme0n1p1   600M vfat   part /boot/efi
    ├─nvme0n1p2     1G ext4   part /boot
    └─nvme0n1p3 231.3G btrfs  part /var/lib/containers/storage/overlay
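  3. Method 3: stable device IDs (preferred for zpool create, since /dev/sdX names can change across boots)

    ls -l /dev/disk/by-id/ | grep -v part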

4.1.2. Create a Storage Pool

  1. Create one of the following Storage Pool types (a sketch using stable device IDs follows this list):

    Single: Use the single 1TB (931GiB) sdc disk to create a Storage Pool
    sudo zpool create SingleDisk /dev/sdc
    Striped: Combine the 153GiB sda & 699GiB sdb disks into an ~849GiB RAID 0 Storage Pool
    sudo zpool create StripedPool /dev/sda /dev/sdb
    Mirrored: Combine the sda & sdb disks into a RAID 1 Storage Pool
    sudo zpool create MirroredPool mirror /dev/sda /dev/sdb
    RAID5: Combine the sda, sdb & sdc disks into a RAID-Z Storage Pool
    sudo zpool create RAIDZPool raidz /dev/sda /dev/sdb /dev/sdc
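  2. A sketch of a mirrored create using stable device IDs and an explicit 4KiB-sector hint (ashift=12); the by-id names are illustrative, substitute your own from /dev/disk/by-id/

    sudo zpool create -o ashift=12 MirroredPool mirror \
        /dev/disk/by-id/ata-DISK_SERIAL1 /dev/disk/by-id/ata-DISK_SERIAL2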

4.2. Pool Status

  1. Get the Pool statuses

    zpool status | grep ONLINE
    Example output
     state: ONLINE
            SingleDisk  ONLINE       0     0     0      (1)
              sdc       ONLINE       0     0     0
    
     state: ONLINE
        StripedPool   ONLINE       0     0     0    (2)
              sda         ONLINE       0     0     0
              sdb         ONLINE       0     0     0
    1 Single Disk Storage Pool
    2 Striped Storage Pool
  2. List the Storage Pools

    zpool list
    Example output
    NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    SingleDisk     928G   372K   928G        -         -     0%     0%  1.00x    ONLINE  -
    StripedPool    849G   211K   849G        -         -     0%     0%  1.00x    ONLINE  -
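  3. Quick health check (prints a single line when all pools are healthy)

    zpool status -x
    Sample output
    all pools are healthy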

4.3. Delete a Storage Pool

  1. Destroy

    sudo zpool destroy SingleDisk
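  2. To detach a pool without destroying its data (e.g. to move it to another system), export it instead

    sudo zpool export SingleDisk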

4.4. Import Storage Pools

  1. Find Pools that can be imported:

    sudo zpool import
    Sample output
      pool: MirroredDisk
        id: 5839134755486227288
     state: ONLINE
    action: The pool can be imported using its name or numeric identifier. (1)
    config:
    
            MirroredDisk  ONLINE
              mirror-0    ONLINE
                sdb       ONLINE
                sdc       ONLINE
    1 This pool can be imported
  2. Import the Pool

    sudo zpool import -f MirroredDisk
  3. List the imported Pool

    sudo zpool list
    Sample output
    NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    MirroredDisk  29.5G  11.3G  18.2G        -         -     3%    38%  1.00x    ONLINE  -
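  4. A pool can also be imported by the numeric id shown in the zpool import listing, or all discovered pools can be imported at once

    sudo zpool import 5839134755486227288
    sudo zpool import -a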

4.4.1. Troubled Pools

  • For an issue like the following:

    sudo zpool import -f USB-16TB
    cannot import 'USB-16TB': I/O error
            Recovery is possible, but will result in some data loss.
            Returning the pool to its state as of Sun 11 Aug 2024 10:05:01 AM EDT
            should correct the problem.  Approximately 10 seconds of data
            must be discarded, irreversibly.  Recovery can be attempted
            by executing 'zpool import -F USB-16TB'.  A scrub of the pool
            is strongly recommended after recovery.
    1. Force the import

      sudo zpool import -fF USB-16TB
    2. Check scrub status

      Run
      sudo zpool status USB-16TB
      Sample output showing scrub progress
        pool: USB-16TB
       state: ONLINE
        scan: scrub in progress since Sun Aug 11 00:24:25 2024
              5.79T scanned at 0B/s, 4.09T issued at 220M/s, 5.79T total
              0B repaired, 70.59% done, 02:15:08 to go    (1)
      config:
      
              NAME                                            STATE     READ WRITE CKSUM
              USB-16TB                                        ONLINE       0     0     0
                usb-ATA_ST16000NT001-3LV_0123456789ABCDE-0:0  ONLINE       0     0     0
      
      errors: No known data errors                        (2)
      1 Scrub progress with 2 hours 15 minutes to go
      2 No discovered errors yet
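    3. If a scrub did not start automatically after recovery, launch one manually

      sudo zpool scrub USB-16TB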

4.5. Datasets

An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data…

— http://doc.freenas.org/9.10/storage.html#create-dataset
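For example (a sketch; the dataset name is illustrative), compression and a quota can be set and checked per dataset:

    sudo zfs set compression=lz4 RAIDZPool/Folder1
    sudo zfs set quota=100G RAIDZPool/Folder1
    zfs get compression,quota RAIDZPool/Folder1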

4.6. Create a Dataset

  1. Create the DataSet within an existing Pool

    sudo zfs create RAIDZPool/Folder1
  2. Show the created DataSet and its default mountpoint

    zfs get all | grep mountpoint
    Output
    RAIDZPool          mountpoint            /RAIDZPool             default
    RAIDZPool/Folder1  mountpoint            /RAIDZPool/Folder1     default
  3. Change the DataSet’s mountpoint

    sudo zfs set mountpoint=/mnt/RAIDZPool_Folder1 RAIDZPool/Folder1
  4. Show the DataSet’s new mountpoint

    zfs get all | grep mountpoint
    Output
    RAIDZPool          mountpoint            /RAIDZPool              default
    RAIDZPool/Folder1  mountpoint            /mnt/RAIDZPool_Folder1  local  (1)
    1 New mountpoint
  5. Set ownership on the mountpoint

    sudo chown -R mattosd:mattosd /mnt/RAIDZPool_Folder1
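  6. List all DataSets in the Pool (optional check)

    zfs list -r RAIDZPool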

4.6.1. Remove a DataSet

  1. Destroy the DataSet

    sudo zfs destroy RAIDZPool/Folder1
  2. Remove the mountpoint

    sudo rm -R /mnt/RAIDZPool_Folder1
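  3. If the DataSet has children or snapshots, destroy recursively (destructive; double-check the target first)

    sudo zfs destroy -r RAIDZPool/Folder1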

4.7. Zvols

A zvol is a feature of ZFS that creates a raw block device over ZFS. This allows you to use a zvol as an iSCSI device extent.

— http://doc.freenas.org/9.10/storage.html#create-zvol
ZVols can be used as virtual disks for VMs
  • TBA
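A minimal sketch of creating a zvol to back a VM disk (name and size are illustrative):

    sudo zfs create -V 20G RAIDZPool/vmdisk1
    ls -l /dev/zvol/RAIDZPool/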