Sorry for reposting; my last post got stuck at "Awaiting approval before being displayed publicly."
I'm running Proxmox on two 512 GB NVMe drives in a ZFS RAID1 (mirror) pool with ashift=12. I've written a small script that measures writes per hour (in MB) using smartctl (source: GitHub).
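Roughly, the script works like this (a simplified Python sketch of the same idea, not the actual GitHub script; the device path and the one-hour interval are just examples). For NVMe drives, smartctl reports "Data Units Written", where one data unit is 512,000 bytes:

Code:
#!/usr/bin/env python3
# Simplified sketch: sample "Data Units Written" via smartctl, wait, sample again.
# Needs root (smartctl reads SMART data from the device).
import re
import subprocess
import time

DISK = "/dev/nvme0"    # example device path; adjust to your drive
UNIT_BYTES = 512_000   # NVMe spec: one data unit = 1000 * 512 bytes

def data_units_written(dev: str) -> int:
    out = subprocess.run(
        ["smartctl", "-A", dev], capture_output=True, text=True, check=True
    ).stdout
    # Matches a line like: "Data Units Written: 1,234,567 [632 GB]"
    m = re.search(r"Data Units Written:\s*([\d,]+)", out)
    if m is None:
        raise RuntimeError(f"no 'Data Units Written' line for {dev}")
    return int(m.group(1).replace(",", ""))

start = data_units_written(DISK)
time.sleep(3600)    # measure over one hour
end = data_units_written(DISK)

print(f"{(end - start) * UNIT_BYTES / 1_000_000:.1f} MB written in the last hour")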
I have a single VM (OPNsense) that writes around 70 KB per minute (as shown in the VM's Summary tab), which works out to about 4.2 MB/hour, i.e. less than 5 MB/hour.
With the VM off, my script (smartctl) reports 85 MB/hour of writes (that's already lower than before, because I disabled HA and a bunch of other services). After turning the VM on, it jumps to 608 MB/hour, i.e. around 500 MB/hour more. I know the SSDs will still last 10+ years, so I'm not worried about wear. What I want to understand is what's causing roughly 100x write amplification (~500 MB/hour of extra device writes divided by ~5 MB/hour of guest writes) and how that's even possible. What am I doing wrong?
# zpool get all
Code:
NAME   PROPERTY                       VALUE                 SOURCE
rpool  size                           464G                  -
rpool  capacity                       2%                    -
rpool  altroot                        -                     default
rpool  health                         ONLINE                -
rpool  guid                           16229665517427xxxxxx  -
rpool  version                        -                     default
rpool  bootfs                         rpool/ROOT/pve-1      local
rpool  delegation                     on                    default
rpool  autoreplace                    off                   default
rpool  cachefile                      -                     default
rpool  failmode                       wait                  default
rpool  listsnapshots                  off                   default
rpool  autoexpand                     off                   default
rpool  dedupratio                     1.00x                 -
rpool  free                           453G                  -
rpool  allocated                      10.6G                 -
rpool  readonly                       off                   -
rpool  ashift                         12                    local
rpool  comment                        -                     default
rpool  expandsize                     -                     -
rpool  freeing                        0                     -
rpool  fragmentation                  0%                    -
rpool  leaked                         0                     -
rpool  multihost                      off                   default
rpool  checkpoint                     -                     -
rpool  load_guid                      16910426424055xxxxxx  -
rpool  autotrim                       off                   default
rpool  compatibility                  off                   default
rpool  feature@async_destroy          enabled               local
rpool  feature@empty_bpobj            active                local
rpool  feature@lz4_compress           active                local
rpool  feature@multi_vdev_crash_dump  enabled               local
rpool  feature@spacemap_histogram     active                local
rpool  feature@enabled_txg            active                local
rpool  feature@hole_birth             active                local
rpool  feature@extensible_dataset     active                local
rpool  feature@embedded_data          active                local
rpool  feature@bookmarks              enabled               local
rpool  feature@filesystem_limits      enabled               local
rpool  feature@large_blocks           enabled               local
rpool  feature@large_dnode            enabled               local
rpool  feature@sha512                 enabled               local
rpool  feature@skein                  enabled               local
rpool  feature@edonr                  enabled               local
rpool  feature@userobj_accounting     active                local
rpool  feature@encryption             enabled               local
rpool  feature@project_quota          active                local
rpool  feature@device_removal         enabled               local
rpool  feature@obsolete_counts        enabled               local
rpool  feature@zpool_checkpoint       enabled               local
rpool  feature@spacemap_v2            active                local
rpool  feature@allocation_classes     enabled               local
rpool  feature@resilver_defer         enabled               local
rpool  feature@bookmark_v2            enabled               local
rpool  feature@redaction_bookmarks    enabled               local
rpool  feature@redacted_datasets      enabled               local
rpool  feature@bookmark_written       enabled               local
rpool  feature@log_spacemap           active                local
rpool  feature@livelist               enabled               local
rpool  feature@device_rebuild         enabled               local
rpool  feature@zstd_compress          enabled               local
rpool  feature@draid                  enabled               local