Description:

Long story short: I decided to run some storage benchmarks with different ZFS dataset/volume options. Right away I started getting kernel error messages (from a simple dd read to /dev/null):

[ 4260.284273] blk_update_request: I/O error, dev sda, sector 774396620 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
[ 4260.285275] zio pool=samsung_data vdev=/dev/vg_dmz/lv_zfs_data error=5 type=2 offset=363201927168 size=4096 flags=180880
[ 4260.286286] sd 0:0:0:0: [sda] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=356s
[ 4260.287318] sd 0:0:0:0: [sda] tag#10 Sense Key : Illegal Request [current] 
[ 4260.288332] sd 0:0:0:0: [sda] tag#10 Add. Sense: Unaligned write command
[ 4260.289338] sd 0:0:0:0: [sda] tag#10 CDB: Read(10) 28 00 06 f0 e9 db 00 00 68 00
[ 4260.290340] blk_update_request: I/O error, dev sda, sector 116451803 op 0x0:(READ) flags 0x700 phys_seg 3 prio class 0
[ 4260.291372] zio pool=samsung_data vdev=/dev/vg_dmz/lv_zfs_data error=5 type=1 offset=26334180864 size=53248 flags=180880

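For reference, the read that triggered the errors looked roughly like the sketch below; the zvol name (testvol) and the dd parameters are assumptions, only the pool name samsung_data comes from the log above:

    # hypothetical reproduction of the benchmark read; zvol name and block size are assumed
    dd if=/dev/zvol/samsung_data/testvol of=/dev/null bs=1M status=progress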

SMART and a scrub showed nothing suspicious. Tens of test iterations revealed that compression=zstd was the real cause of the problem (I first checked ashift, recordsize, volblocksize and volmode, i.e. the options that seemed more relevant).
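Roughly how each test iteration looked, as a sketch; the volume size, the fill-data source and the lz4 counter-example are assumptions, only volblocksize=8k and compression=zstd reflect the actual setup, and the clean/erroring outcomes simply restate the observation above:

    # assumed test cycle: recreate the zvol with a given compression, write data, read it back
    zfs create -V 100G -o volblocksize=8k -o compression=zstd samsung_data/testvol
    dd if=/dev/urandom of=/dev/zvol/samsung_data/testvol bs=1M count=10240   # fill phase (data source assumed)
    dd if=/dev/zvol/samsung_data/testvol of=/dev/null bs=1M                  # read phase -> kernel I/O errors
    zfs destroy samsung_data/testvol
    zfs create -V 100G -o volblocksize=8k -o compression=lz4 samsung_data/testvol
    # the same fill/read cycle with lz4 (or compression=off) completes without errors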

BTW, I use zstd compression for volumes on other storage devices and it works like a charm, so I think the problem is a combination of this particular Samsung SSD and ZFS internals. It doesn't affect datasets, only volumes. The underlying LVM volume was readable without any errors the whole time; the problem appeared only with zvols during heavy I/O.
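The direct check of the backing device was essentially this (a sketch; the block size is an assumption, the LV path is the one reported in the zio errors):

    # read the whole backing LV directly, bypassing ZFS; this completed without kernel errors
    dd if=/dev/vg_dmz/lv_zfs_data of=/dev/null bs=1M status=progress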

Some additional info (storage layers; a rough sketch of how the stack is assembled follows the list):

  1. Samsung SSD 850 PRO 512GB (EXM04B6Q, the latest firmware).
  2. LUKS (version 2).
  3. LVM, thick-provisioned (4 MiB extent size).
  4. ZFS 2.0.5, zvol with volblocksize=8k.
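For completeness, a rough sketch of how such a stack can be assembled; the device/volume names not present in the logs, the sizes, and the ashift value are assumptions, only the 4 MiB extent size, volblocksize=8k and compression=zstd come from the report itself:

    # hypothetical stack assembly matching the layers above (whether LUKS sits on the whole
    # disk or a partition is assumed; passphrase handling omitted)
    cryptsetup luksFormat --type luks2 /dev/sda
    cryptsetup open /dev/sda crypt_sda
    pvcreate /dev/mapper/crypt_sda
    vgcreate -s 4m vg_dmz /dev/mapper/crypt_sda                       # 4 MiB physical extents
    lvcreate -L 400G -n lv_zfs_data vg_dmz                            # LV size assumed
    zpool create -o ashift=12 samsung_data /dev/vg_dmz/lv_zfs_data    # ashift value assumed
    zfs create -V 100G -o volblocksize=8k -o compression=zstd samsung_data/testvol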