I wanted to know what the risks of ending up with a broken VM are if an error occurs (filesystem, network, disks) while live migrating a VM without shared storage using the above command. And what happens if the migration is cancelled mid-process: is the VM left partially moved?
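For context, a sketch of how such a migration is typically started (the exact command referenced above isn't quoted here, so treat the VMID and node name as examples):

```shell
# Assumed form of the migration command discussed above (VMID and target
# node are hypothetical): --online keeps the guest running,
# --with-local-disks copies the non-shared volumes to the target.
qm migrate 100 targetnode --online --with-local-disks

# The source volumes should only be removed after the copy completes
# successfully, so an aborted or failed migration should leave the
# original VM intact (possibly with leftover volumes on the target
# that need manual cleanup).
```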
Indeed, we'd like to...
We connect to the hosts using a different port; however, the hosts talk to each other.
Sorry, here we go:
2019-04-03 09:43:00 100-0: start replication job
2019-04-03 09:43:00 100-0: guest => VM 100, running => 0
2019-04-03 09:43:00 100-0: volumes => local-zfs-hdd:vm-100-disk-1
I'm having the same issue: when I move a VM back to the original host, replication fails and I have to manually delete the affected VM's snapshots to get replication working again. Here are the logs from the failure:
2019-04-03 08:28:03 100-0: end replication job with error: command 'set...
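For anyone hitting the same error, this is roughly the manual cleanup I mean (dataset and snapshot names are examples from my setup; adjust to yours):

```shell
# List the snapshots on the affected dataset; pvesr replication snapshots
# are usually named __replicate_<jobid>_<timestamp>__.
zfs list -t snapshot -o name rpool/data/vm-100-disk-1

# Destroy the stale replication snapshot so the next run can start fresh.
# Double-check the name first: destroying a snapshot is irreversible.
# (Snapshot name below is a made-up example.)
zfs destroy rpool/data/vm-100-disk-1@__replicate_100-0_1554276180__
```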
Thanks, here are the results:
zfs get all rpool/data/vm-133-disk-1 | grep used
rpool/data/vm-133-disk-1 used 145G -
rpool/data/vm-133-disk-1 usedbysnapshots 453M -
rpool/data/vm-133-disk-1 usedbydataset 145G...
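To break that usage down further in one view, `zfs list -o space` shows the same accounting split by category (dataset name taken from the output above):

```shell
# USED is split into USEDSNAP (space held by snapshots), USEDDS (the
# dataset's own data), USEDREFRESERV (refreservation) and USEDCHILD
# (child datasets).
zfs list -o space rpool/data/vm-133-disk-1
```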
I'm curious about the disk usage of a VM. Here is the config of that VM:
I'm having the same issue: even after setting sysctl vm.swappiness=10, swap usage does not decrease as it should, and I have plenty of free RAM. Running PVE 5.2:
proxmox-ve: 5.2-2 (running kernel: 4.15.18-7-pve)
pve-manager: 5.2-10 (running version: 5.2-10/6f892b40)...
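In case it helps: as far as I understand, swappiness only influences future paging decisions; pages already swapped out stay in swap until they are touched again or swap is drained. A sketch of what I'd check on the PVE host (file name and value are examples):

```shell
# Make the setting persistent across reboots.
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
sysctl -p /etc/sysctl.d/99-swappiness.conf

# Verify the value currently in effect.
cat /proc/sys/vm/swappiness

# Optionally force already-swapped pages back into RAM.
# Only do this if free RAM comfortably exceeds current swap usage.
swapoff -a && swapon -a
```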
The question may be stupid, but I need to update the switch that the nodes of a cluster are connected to (not with shared storage).
It's clear that doing this will make the quorum unavailable; does that cause any problem right now? I mean, let's imagine the switch dies or whatever, should I be...
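For what it's worth, my understanding (please correct me if wrong): running VMs keep running without quorum, but the cluster goes read-only (no config changes, no starting or stopping guests), and nodes with active HA resources may fence themselves. If the outage drags on, a node can be made operational again by lowering the expected vote count:

```shell
# On a node that lost quorum, temporarily lower the expected vote count
# so it regains quorum on its own. Use with care: this bypasses the
# split-brain protection, so only do it when the other nodes are
# genuinely unreachable, not merely partitioned.
pvecm expected 1

# Check cluster membership and quorum state.
pvecm status
```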
I have an enterprise license key for both servers. However, the master has Mail Gateway 5.0-78 installed, while the slave runs Mail Gateway 5.1-2.
There is no update available on the master to upgrade to 5.1-2. How should I upgrade so that both run the same...
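A sketch of what I'd try on the master, assuming the enterprise repository is configured and the subscription is active (the repo file path is what a standard PMG install typically uses):

```shell
# Confirm the enterprise repo is present and not commented out.
cat /etc/apt/sources.list.d/pmg-enterprise.list

# Refresh package lists and apply all pending updates.
apt update
apt dist-upgrade

# Verify the resulting version matches the slave.
pmgversion
```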