Hello,
I have a Proxmox cluster set up with TrueNAS as storage, using ZFS over iSCSI via TheGrandWazoo's plugin: https://github.com/TheGrandWazoo/freenas-proxmox.
The Proxmox cluster is running version 6.4-8 (pve-manager/6.4-8/185e14db, kernel 5.4.119-1-pve).
The TrueNAS server is running version 12 (TrueNAS-12.0-U8).
I set up the connection to the storage with TheGrandWazoo's plugin and have noticed some odd problems: snapshots take forever to finish, and some VMs I deleted were not removed from the storage (the ZVOL sometimes persists, and I have to delete it manually).
But my big concern is VMs growing beyond the size I specified when I created them. In hindsight, my setup may not be optimal, given some reports I have since found about issues between PVE and TrueNAS, but I didn't have that information at the time and this setup was created in a rush... unfortunately. I see other posts where people run ZFS locally on the PVE nodes, but for us it is important that the storage is accessible from all nodes so we can live-migrate VMs between them. Please share your thoughts on this.
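For the leftover-ZVOL problem, this is a small sketch I use to spot orphans by comparing the volumes on the pool against the VMIDs that still exist in the cluster. It assumes the plugin's default vm-&lt;vmid&gt;-disk-&lt;n&gt; naming scheme; the pool name and VMIDs below are just illustrative:

```python
import re

def find_orphaned_zvols(zvol_names, active_vmids):
    """Return zvols whose VMID no longer exists in the cluster.

    Assumes the default vm-<vmid>-disk-<n> naming scheme; adjust the
    pattern if your storage names volumes differently.
    """
    pattern = re.compile(r"vm-(\d+)-disk-\d+$")
    orphans = []
    for name in zvol_names:
        m = pattern.search(name)
        if m and int(m.group(1)) not in active_vmids:
            orphans.append(name)
    return orphans

# Example: VM 101 was deleted in PVE but its zvol survived on TrueNAS.
zvols = ["pool01/vm-100-disk-1", "pool01/vm-101-disk-0", "pool01/vm-102-disk-1"]
print(find_orphaned_zvols(zvols, {100, 102}))  # ['pool01/vm-101-disk-0']
```

Feed it the output of `zfs list -H -o name` from the TrueNAS side and the VMIDs from `qm list` across all nodes, and double-check each candidate before destroying anything.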
Here is what I know so far (taking the worst case as an example, VM ID 100):
Bash:
root@pve08:~# qm list
      VMID NAME        STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 XXXXXXXXX   running    32768          15360.00 6259
       102 XXXXXXXXX   running    49152           4000.00 6405
       114 XXXXXXXXX   running    16384           2600.00 6473
root@pve08:~# qm config 100
agent: 1
boot: order=ide2;scsi0;net0
cores: 8
ide2: none,media=cdrom
memory: 32768
name: XXXXXXXXX
net0: virtio=XXXXXXXXX,bridge=vmbr0,firewall=1,rate=15,tag=300
net1: virtio=XXXXXXXXX,bridge=vmbr0,firewall=1,rate=70,tag=208
numa: 0
ostype: l26
scsi0: san01-datastore01:vm-100-disk-1,cache=writeback,discard=on,iops_rd=1000,iops_rd_max=2000,iops_wr=1000,iops_wr_max=2000,mbps_rd=100,mbps_rd_max=300,mbps_wr=100,mbps_wr_max=300,size=15T
scsihw: virtio-scsi-pci
smbios1: uuid=3fd78b62-0c60-4b2d-b693-9b408e27a698
sockets: 2
vmgenid: 6ad23a7a-4ec9-4e30-b29a-979c3fa1bead
root@pve08:~# qm listsnapshot 100
`-> current You are here!
The VM is configured with a 15TB virtual disk. No snapshots present.
On my storage, on the other hand, I see a much bigger volume size. How is this possible?
Bash:
root@san01[~]# zfs list pool01/vm-100-disk-1
NAME                   USED  AVAIL  REFER  MOUNTPOINT
pool01/vm-100-disk-1  30.6T  19.1T  30.6T  -
root@san01[~]# zfs list -t snapshot pool01/vm-100-disk-1
no datasets available
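From what I have read so far, one explanation for USED being roughly double the logical size on RAIDZ pools is allocation padding when the volblocksize is small relative to the vdev's stripe width (I still need to confirm with `zfs get volblocksize,refreservation pool01/vm-100-disk-1`). Here is a rough sketch of that padding calculation; the 6-disk RAIDZ2 geometry and ashift=12 below are assumptions for illustration, not my actual pool layout:

```python
import math

def raidz_allocation_ratio(volblocksize, ashift, width, parity):
    """Estimate the allocated/logical ratio for one zvol block on RAIDZ.

    width  = total disks in the vdev; parity = 1/2/3 for RAIDZ1/2/3.
    Parity sectors are added per stripe row, then the allocation is
    rounded up to a multiple of (parity + 1) sectors (RAIDZ padding).
    """
    sector = 1 << ashift
    data_sectors = math.ceil(volblocksize / sector)
    rows = math.ceil(data_sectors / (width - parity))
    total = data_sectors + rows * parity
    total = math.ceil(total / (parity + 1)) * (parity + 1)  # skip-sector padding
    return total / data_sectors

# Assumed 8K volblocksize on a hypothetical 6-disk RAIDZ2, ashift=12:
print(raidz_allocation_ratio(8192, 12, 6, 2))    # 3.0 -> 200% overhead
# A larger 128K volblocksize brings it down near the nominal 1.5:
print(raidz_allocation_ratio(131072, 12, 6, 2))  # 1.5
```

If this is the cause, the ratio only applies to newly written blocks, so raising volblocksize would require recreating (or moving) the disks, not just changing the property.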
Since I can't find many setups like ours (PVE with storage on TrueNAS), I haven't found much information to help me figure out the source of this problem. Maybe more experienced admins have some suggestions?
I appreciate your help.