I'm seeing some odd things happen on a VM. I've been doing auto snapshots with cv4pve-autosnap, then copying the data across to another location using pve-zsync.
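For context, the two jobs are invoked roughly like this (flags typed from memory, so treat the exact options as illustrative; host/credentials redacted):

Code:
# nightly snapshots via cv4pve-autosnap
cv4pve-autosnap --host=127.0.0.1 --username=root@pam --password=... \
    --vmid=100 snap --label=daily --keep=7
# replication to the other box via pve-zsync
pve-zsync sync --source 100 --dest BACKUPHOST:rpool/backup --maxsnap 7 --verbose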
The running VM is using rpool/data/vm-100-disk-0; however, cv4pve-autosnap is taking snapshots on a dataset that appears to be left over from a previous snapshot restore: rpool/data/vm-100-state-BeforeOPUD.
This in turn is causing pve-zsync to take forever to copy: days, where historically it ran in a few hours.
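When it starts crawling again, my plan was to estimate the stream size first with a dry-run send before letting pve-zsync loose (the snapshot names here are hypothetical):

Code:
# -n = dry run, -v = report the estimated stream size without sending anything
zfs send -n -v rpool/data/vm-100-state-BeforeOPUD@auto-daily-last
# incremental estimate between two snapshots on the actual disk
zfs send -n -v -i @auto-daily-prev rpool/data/vm-100-disk-0@auto-daily-last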
I've deleted all snapshots so I can start fresh, and rebooted the server.
Additionally, this is a Windows Server 2012 R2 VM, and the space ZFS reports greatly exceeds what Windows says is used: ZFS shows 1.19T for the disk, while Windows shows 429GB used.
I'm worried about deleting what I believe is the unused dataset. Is there a good way to verify that it actually isn't in use? And if it is in use, can the two be combined?
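Before touching it, my plan was to check whether anything still references that state dataset (names taken from the output further down; as far as I know, origin comes back as a dash if the dataset is standalone and not part of a clone chain):

Code:
# any clone relationship on the suspect dataset?
zfs get origin rpool/data/vm-100-state-BeforeOPUD
# does any VM config still reference it?
grep -r "vm-100-state-BeforeOPUD" /etc/pve/qemu-server/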
More information below.
PVE config
Code:
cat /etc/pve/qemu-server/100.conf
balloon: 0
boot: dcn
bootdisk: virtio0
cores: 6
memory: 32768
name: COMPANY_VM
net0: virtio=EA:CE:C6:F1:13:EB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win8
parent: BeforeOPUD
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=cd6176e8-9f99-4cbe-a263-8fa5ea79590a
sockets: 2
startup: order=2
virtio0: local-zfs:vm-100-disk-0,size=1000G
vmgenid: 4f2bf1b3-01e4-4c28-b690-02918fb33952
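That parent: BeforeOPUD line makes me think PVE still considers the BeforeOPUD snapshot to exist, in which case vm-100-state-BeforeOPUD would be its saved RAM image rather than an orphaned disk. If so (my assumption, not something I've confirmed), removing the snapshot through PVE should clean up the state volume along with it:

Code:
qm listsnapshot 100
# only if BeforeOPUD is still listed:
qm delsnapshot 100 BeforeOPUD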
ZFS Space:
Code:
~# zfs list -o space
NAME                                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                                3.27T  1.53T        0B    222K             0B      1.53T
rpool/ROOT                           3.27T  19.0G        0B    205K             0B      19.0G
rpool/ROOT/pve-1                     3.27T  19.0G        0B   19.0G             0B         0B
rpool/data                           3.27T  1.51T        0B    205K             0B      1.51T
rpool/data/vm-100-disk-0             3.27T  1.19T        0B   1.19T             0B         0B
rpool/data/vm-100-state-BeforeOPUD   3.27T  53.9G        0B   53.9G             0B         0B
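To dig into the 1.19T vs 429GB gap, I was going to compare logical and physical usage on the zvol; my understanding is that logicalused tracks what the guest has ever written, while used also counts blocks Windows has since freed but never trimmed back:

Code:
zfs get volsize,used,logicalused,referenced,volblocksize,refreservation rpool/data/vm-100-disk-0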
Snapshots:
Code:
# zfs list -t snapshot
no datasets available
Trim to try and get some space back
Code:
zpool trim rpool
cannot trim: no devices in pool support trim operations
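If I'm reading that right, zpool trim talks to the physical devices, so the error just means my drives (or the controller in front of them) don't expose TRIM; it says nothing about the guest handing freed blocks back to the zvol. For that, my understanding is the virtual disk needs discard=on (newer QEMU supports it on virtio-blk; otherwise the disk would have to move to virtio-scsi) plus a retrim inside Windows:

Code:
# enable discard on the existing disk; needs a full VM stop/start to apply
qm set 100 --virtio0 local-zfs:vm-100-disk-0,size=1000G,discard=on
# then inside the 2012R2 guest:
#   defrag C: /L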
autotrim enabled
Code:
zpool get autotrim rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  autotrim  on     local
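Given the trim error above, autotrim=on is presumably doing nothing on this pool. To confirm the devices genuinely lack discard support, I was going to check lsblk; zeros in the DISC-GRAN/DISC-MAX columns mean no discard capability:

Code:
lsblk -D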