hi all,
I'm testing a new server for Proxmox (ZFS) and have 4x Samsung 860 PRO 512GB SSDs in it.
All the documentation I've found seems to say these disks have a 4K physical block size,
yet all the tools (fdisk, hdparm, parted, stat, etc.) report 512 for both physical and logical sector size.
Maybe someone has experience with the same disks and can confirm whether they are 4K native or not?
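For reference, this is the kind of check I've been running (with /dev/sda standing in for one of the SSDs):

Code:
# what the kernel thinks the sector sizes are
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
# what the drive itself reports
smartctl -i /dev/sda | grep -i 'sector size'
# quick overview of all disks
lsblk -o NAME,PHY-SEC,LOG-SEC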
And if it is indeed 512, do you have any recommendations for volblocksize/ashift/compression on the pool? Also, if ZFS still uses the smallest block available regardless of what you set at pool creation, is it even possible to trick a zvol (or dataset) into using a 4K minimum?
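To make the question concrete, these are the knobs I'm talking about (pool and device names below are made up, just for illustration):

Code:
# ashift is per-vdev and can only be set at creation time
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# volblocksize is likewise fixed when the zvol is created
zfs create -V 32G -o volblocksize=16k tank/test-zvol
# for datasets, recordsize is only an upper bound, not a minimum
zfs set recordsize=16k tank/somedataset
zfs set compression=lz4 tank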
(The initial idea for the server is GitLab runners, both LXCs and VMs, so CPU-heavy and disk-write-heavy.)
P.S. Right now I just have the defaults:
ashift=12
volblocksize for VMs at the default 8K (which, from what I've seen, is definitely not the best choice)
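If it matters: as far as I understand, the default for new VM disks can be changed in the Proxmox storage config (assuming the ZFS storage is named local-zfs; it only applies to newly created zvols):

Code:
pvesm set local-zfs --blocksize 16k
# or directly in /etc/pve/storage.cfg, under the zfspool entry:
#   blocksize 16k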
Thank you very much!
Code:
root@revolver:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
root@revolver:~# zpool get ashift
NAME   PROPERTY  VALUE   SOURCE
rpool  ashift    12      local
root@revolver:~# zfs get volblocksize
NAME                      PROPERTY      VALUE     SOURCE
rpool                     volblocksize  -         -
rpool/ROOT                volblocksize  -         -
rpool/ROOT/pve-1          volblocksize  -         -
rpool/data                volblocksize  -         -
rpool/data/vm-100-disk-0  volblocksize  8K        default