We upgraded our PVE cluster nodes from 5.4 to 6.2 a few weeks ago, and we also upgraded our ZFS pool to version 0.8.4. We also added an iSCSI storage (FreeNAS) which now hosts most of the storage.
I'm now seeing that the ZFS cache (ARC) is eating a bit too much RAM (this is...
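In case it helps anyone reading along: the ARC can be capped with a module option. A minimal sketch, assuming an 8 GiB cap (the value is in bytes; pick one that fits your RAM):

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 8 GiB = 8 * 1024^3 bytes (assumed value, adjust to your RAM)
options zfs zfs_arc_max=8589934592
```

After editing, refresh the initramfs (`update-initramfs -u`) and reboot for the limit to apply at boot; the value can also be changed live by writing to `/sys/module/zfs/parameters/zfs_arc_max`.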
When you say "LVM over iSCSI", can I access the same LVM from both hosts, or do I have to set up two LVMs, one per host? If the second option is right, it means the files are not shared between the two LVMs, is that correct?
It's certainly written somewhere, but I didn't find it on https://pve.proxmox.com/wiki/Storage:_iSCSI
It says that it's generally not recommended to mount the same iSCSI LUN on multiple hosts directly, as it leads to data corruption.
Therefore, how does Proxmox handle this when used inside a cluster...
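My current understanding, and I may be wrong, is that Proxmox sidesteps the corruption problem by putting LVM on top of the LUN and marking the storage as shared, so the cluster coordinates access and a given LV is only active on one node at a time. A hypothetical `/etc/pve/storage.cfg` sketch (portal, target, and VG names are made up):

```
iscsi: freenas-iscsi
        portal 10.0.0.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content none

lvm: freenas-lvm
        vgname vg_freenas
        shared 1
        content images
```

The `shared 1` flag is what tells the cluster that every node sees the same volume group.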
I've found the culprit! FreeNAS is causing the issue, specifically the disks inside it (all SSD, 100% busy during the "not reachable" time). Now I'm trying to find out why they are saturating... so nothing to do on the Proxmox side, I think.
I recently added a FreeNAS server to my network and have set it up as an NFS server for Proxmox.
Ping is OK, no loss at all. The network is 10 Gbit/s.
Randomly, I get the following error on my nodes:
Jul 3 02:12:45 athos pvestatd: unable to activate storage 'freenas-a' - directory...
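As a side note, when this happens, a quick way to tell whether the mountpoint itself is hung (rather than the network) is to stat it with a timeout; a minimal sketch, where the mountpoint path is an assumption (in practice something like `/mnt/pve/freenas-a`):

```shell
#!/bin/sh
# Hypothetical check: a hung NFS mount makes stat block indefinitely,
# so bound the call with a timeout instead of letting a shell hang.
MOUNTPOINT="${1:-/tmp}"   # replace with your storage mountpoint
if timeout 2 stat "$MOUNTPOINT" > /dev/null 2>&1; then
    echo "storage reachable"
else
    echo "storage hung or missing"
fi
```

If the stat blocks past the timeout, the mount is hung and pvestatd will report the storage as inactive even though ping still answers.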
Hi, thank you very much for your reply, very interesting as I was thinking about doing the same.
Apart from databases, did you experience any big issues when switching to a cloned snapshot from the second NAS? Like read-only or corrupted VMs?
How do you manage FreeNAS updates on both of your...
I'm looking into building a FreeNAS server which will host my VMs through an NFS share.
As Proxmox does not support snapshots of raw images hosted on an NFS server, I'm trying to find a way to do this in an "easy" way.
I was thinking about creating a dataset per VM which will allow me to...
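To make the idea concrete, here is a dry-run sketch of what I have in mind: one dataset per VM, so each can be snapshotted and rolled back independently on the FreeNAS side. Pool and VM names are made up, and the script only prints the commands it would run:

```shell
#!/bin/sh
# Dry-run sketch: one ZFS dataset per VM (hypothetical pool "tank/vms").
# Commands are printed, not executed, so nothing is touched.
VMID=101
DATASET="tank/vms/vm-${VMID}"
echo "zfs create ${DATASET}"                      # one dataset per VM
echo "zfs snapshot ${DATASET}@before-upgrade"     # per-VM snapshot
echo "zfs rollback ${DATASET}@before-upgrade"     # independent rollback
```

With one NFS export per dataset, a rollback only affects that single VM instead of everything on the share.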
I wanted to know what the risks of ending up with a broken VM would be if an error occurs (filesystem, network, disks) while live migrating a VM without shared storage using the above command. What happens if the process is cancelled, is the VM partially moved?
Indeed, we'd like to...
We connect to the host using a different port, however they talk to each other
Sorry, here we go:
2019-04-03 09:43:00 100-0: start replication job
2019-04-03 09:43:00 100-0: guest => VM 100, running => 0
2019-04-03 09:43:00 100-0: volumes => local-zfs-hdd:vm-100-disk-1
I'm having the same issue: when I move a VM back to the original host, replication fails and I have to manually delete the snapshots of the affected VM to get replication working again. Here are the logs from the failure:
2019-04-03 08:28:03 100-0: end replication job with error: command 'set...
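For anyone hitting the same thing, the manual cleanup I do looks roughly like this. It's a dry-run sketch (it only prints the commands), the dataset name is an example, and the stale replication snapshots are the ones whose names start with `__replicate_`:

```shell
#!/bin/sh
# Dry-run sketch of the manual cleanup: printed, not executed.
# Dataset name is an example; adapt it to your pool layout.
DISK="rpool/data/vm-100-disk-1"
echo "zfs list -t snapshot -o name ${DISK}"              # find stale snapshots
echo "zfs destroy ${DISK}@__replicate_100-0_example__"   # remove each one
```

Once the leftover `__replicate_*` snapshots are gone on the target, the next replication run starts from a full sync and succeeds again.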
Thanks, here are the results:
zfs get all rpool/data/vm-133-disk-1 | grep used
rpool/data/vm-133-disk-1 used 145G -
rpool/data/vm-133-disk-1 usedbysnapshots 453M -
rpool/data/vm-133-disk-1 usedbydataset 145G...
I'm curious about the disk usage of a VM. Here is the config of that VM: