BtrFS

Butter Filesystem. Hold the toast.

I've started experimenting with BtrFS, which aims to provide an "advanced and modern filesystem" for Linux and is often compared to ZFS. On my new workstation I'm using BtrFS for my home directories (/home) and my build directories (/mnt/slackbuilds) to gain exposure to the filesystem and compare it to ZFS and to EXT4 on LVM (all of my other data, including the root disk, is on EXT4 on LVM).

I have used ZFS heavily in the past, and BtrFS feels significantly different because many of the fundamental concepts differ. BtrFS has no concept of "pools" or "volume groups" -- instead there are "volumes". BtrFS has no concept of "datasets" or "logical volumes" -- instead there are "subvolumes".
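To make the mapping concrete, here is a minimal sketch of creating a mountable unit on each side, assuming a pool named TESTPOOL and a BtrFS volume already mounted at /data (both names are placeholders, not from my actual setup):

  Using ZFS (a dataset lives inside a pool and is addressed by pool name):
    # zfs create TESTPOOL/home
  Using BtrFS (a subvolume is created as a path inside a mounted volume):
    # btrfs subvolume create /data/home

The ZFS dataset is mounted automatically; the BtrFS subvolume simply appears as a directory under /data, though it can also be mounted on its own via the subvol= mount option.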

Here's a comparison between ZFS, BtrFS, and EXT4 on LVM:

Feature                           | ZFS                                                      | BtrFS                             | EXT4 on LVM
----------------------------------+----------------------------------------------------------+-----------------------------------+-----------------------------------------------
Commands Involved                 | zpool, zfs                                               | mkfs.btrfs, btrfs                 | pvcreate, vgcreate, lvcreate, mkfs.ext4
Can be Boot Filesystem            | Yes                                                      | No                                | No
Can be Root Filesystem            | Yes                                                      | Yes                               | Yes
Can Provide Swap Space            | Yes (zvols)                                              | No                                | Yes (LVM)
Pool of Disks                     | "zpool"                                                  | "volume"                          | "volume group"
Mountable Unit                    | "dataset"                                                | "volume" and "subvolume"          | "logical volume"
OSes with Implementations         | Solaris, OpenSolaris, Nexenta, FreeBSD, Mac OS X, Linux  | Linux                             | Linux
Stability                         | Stable                                                   | Unstable (on-disk format stable)  | Stable
CLI-System Integration [1]        | Strong                                                   | Weak                              | Mild
Grow Online                       | Yes                                                      | Yes                               | Yes
Shrink Pool                       | No                                                       | Online                            | Online
Shrink Filesystem                 | No                                                       | Online                            | Offline
Replace Disk (without parity)     | Yes (must be a compatible-size disk)                     | Yes                               | Yes (copies space allocated on the disk)
Filesystem-Level Storage Pooling  | Yes                                                      | Yes                               | No
Re-balance                        | No                                                       | Yes                               | Can be done manually (pvmove)
Checksumming                      | Yes                                                      | Yes                               | No
Autocorrect Checksum Errors       | Yes                                                      | ???                               | No
Compression                       | Yes                                                      | Yes                               | No
De-duplication                    | Yes                                                      | No                                | No
Ditto Blocks                      | Yes                                                      | ???                               | No
Tiered Caching                    | Yes                                                      | No                                | No
Writable Snapshots                | Yes (clone)                                              | Yes                               | Yes
Copy-on-Write                     | Fast, space-efficient                                    | Fast, space-efficient             | Slow, requires pre-allocating an LV
Redundancy                        | Mirroring and parity (x1, x2, x3)                        | Mirroring                         | Mirroring, though the PVs can be RAID devices
Maximum Volume Size               | 16 exabytes                                              | 16 exabytes                       | 1 exabyte
Maximum File Size                 | 16 exabytes                                              | 16 exabytes                       | 16 terabytes
Maximum Number of Snapshots       | Unlimited                                                | Unlimited                         | Effectively 32

[1] For lack of a better term -- how well the command-line interface integrates with the system as a whole. This is admittedly subjective.

For a more complete, but less focused, comparison see Wikipedia's Comparison of Filesystems.
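As a small illustration of the Compression row above, here is roughly how compression is enabled where it is supported (TESTPOOL and /data are the placeholder names used throughout; options may vary by release):

  Using ZFS (a per-dataset property, applied to new writes):
    # zfs set compression=on TESTPOOL
  Using BtrFS (a mount option covering the whole volume):
    # mount -t btrfs -o compress /dev/A /data

EXT4 has no equivalent, which is why its column reads "No".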


The Rosetta Stone

  1. Task: Create a pool of storage from disks /dev/A, /dev/B, and /dev/C (striped or linear concat)
    1. Using ZFS:
      1. # zpool create TESTPOOL A B C
    2. Using BtrFS:
      1. # mkfs.btrfs /dev/A /dev/B /dev/C
    3. Using EXT4 on LVM:
      1. # pvcreate /dev/A /dev/B /dev/C
      2. # vgcreate TESTPOOL /dev/A /dev/B /dev/C
  2. Task: Make storage from pool available to system
    1. Using ZFS:
      1. # zfs set mountpoint=/data TESTPOOL
    2. Using BtrFS:
      1. # mkdir /data
      2. # mount -t btrfs /dev/A /data
      3. Update /etc/fstab (see the sample entries after this list)
    3. Using EXT4 on LVM:
      1. # mkdir /data
      2. # lvcreate -L SizeOfVolume -n DATA TESTPOOL
      3. # mkfs -t ext4 /dev/TESTPOOL/DATA
      4. # mount /dev/TESTPOOL/DATA /data
      5. Update /etc/fstab (again, see the samples after this list)
  3. Task: Remove a disk from the pool
    1. Using ZFS: N/A (a zpool cannot be shrunk by removing a data disk; see the Shrink Pool row above)
    2. Using BtrFS:
      1. # btrfs device delete /dev/A /data
      2. # btrfs filesystem balance /data
    3. Using EXT4 on LVM:
      1. # pvmove /dev/A
      2. # vgreduce TESTPOOL /dev/A
  4. Task: Replace operational disk
    1. Using ZFS:
      1. # zpool replace TESTPOOL A D
    2. Using BtrFS:
      1. # btrfs device add /dev/D /data
      2. # btrfs device delete /dev/A /data
      3. # btrfs filesystem balance /data
    3. Using EXT4 on LVM:
      1. # pvcreate /dev/D
      2. # vgextend TESTPOOL /dev/D
      3. # pvmove /dev/A /dev/D
      4. # vgreduce TESTPOOL /dev/A
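
The comparison table lists writable snapshots for all three systems, so a fifth Rosetta entry is natural. This is a sketch of my own, using placeholder snapshot names (snap1, clone1, SNAP1) against the storage built in the tasks above:

  5. Task: Take a snapshot
    1. Using ZFS (snapshots are read-only; a clone makes one writable):
      1. # zfs snapshot TESTPOOL@snap1
      2. # zfs clone TESTPOOL@snap1 TESTPOOL/clone1
    2. Using BtrFS (snapshots are ordinary writable subvolumes):
      1. # btrfs subvolume snapshot /data /data/snap1
    3. Using EXT4 on LVM (space must be pre-allocated, per the Copy-on-Write row):
      1. # lvcreate -s -L 1G -n SNAP1 /dev/TESTPOOL/DATA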
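For the "Update /etc/fstab" steps in task 2, the entries would look something like the following. These are illustrative sketches -- the device names carry over from the tasks above, and for a multi-device BtrFS volume a UUID is a safer identifier than any single member device:

  /dev/A              /data  btrfs  defaults  0 0
  /dev/TESTPOOL/DATA  /data  ext4   defaults  0 2

ZFS needs no fstab entry at all: the mountpoint property set in task 2 is stored in the pool and applied automatically when it is imported.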
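Whichever system you use, it is worth verifying the result after each task. These status commands (standard tools, shown here as a quick crib rather than part of the original tasks) are the closest equivalents I know of:

  Using ZFS:
    # zpool status TESTPOOL
    # zfs list
  Using BtrFS:
    # btrfs filesystem show
    # btrfs filesystem df /data
  Using EXT4 on LVM:
    # pvs
    # vgs
    # lvs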