Hi,
I noticed that two of my backup VMs use less space internally than is shown on the hypervisor.
Here is an example.
Inside VM:
/dev/sdb 4.0T 1.8T 2.2T 44% /backup
Outside VM:
NAME                      VOLSIZE  LUSED  USED ...
rpool/data/vm-140-disk-1    2.70T  1.58T  2.70T  -
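For reference, these are roughly the commands behind the numbers above (dataset name taken from my example; adjust to yours):

# inside the VM: filesystem view
df -h /backup

# on the hypervisor: zvol view; logicalused (LUSED) is what was actually written, used also counts reservation/overhead
zfs list -o name,volsize,logicalused,used rpool/data/vm-140-disk-1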
I have many backups for many VMs using PBS.
Now I want to remove most of them for just one VM and one of its disks.
I could click and delete them one by one in the PM or PBS GUI, but I would like to select multiple so I don't have to click thousands of times.
Any suggestions?
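Not something I have tested here, just a sketch: the PBS command line can prune a whole backup group at once. The repository string, VM ID and keep count below are made-up examples:

# dry run: show which snapshots of VM 140's backup group would be removed
proxmox-backup-client prune vm/140 --keep-last 3 --dry-run --repository root@pam@pbs-host:store1

# the same command without --dry-run actually removes them
proxmox-backup-client prune vm/140 --keep-last 3 --repository root@pam@pbs-host:store1

As far as I know a PBS snapshot always covers the whole guest, so this removes backups per VM, not per disk.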
Hmm... if there were no downside, it would be the default setting, I think.
I guess you lose disk space then, if the block size in the PM GUI for that datastore is set to 128k instead of 8k. Am I correct?
Does anyone see any other downsides?
I guess I will run some tests when I have the time...
It might be related to volblocksize of zvols.
If you have the time, please match the volblocksize to the block size used on your disks, and then also match it in the filesystem you use inside your VM.
Then run the same tests.
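A minimal test sketch along those lines (pool name and sizes are made up, not from this thread):

# two test zvols that differ only in volblocksize
zfs create -V 100G -o volblocksize=8k rpool/test-8k
zfs create -V 100G -o volblocksize=128k rpool/test-128k

# put the same filesystem and data on both, then compare allocation
zfs list -o name,volsize,logicalused,used rpool/test-8k rpool/test-128k

# clean up
zfs destroy rpool/test-8k
zfs destroy rpool/test-128k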
But hopefully someone with more experience will join this conversation.
I use zfs rename on the target, then copy and, if needed, fix the VM config files from /var/lib/pve-zsync.
After the first node is fixed and has no remnants of the old VMs, I set up a job in the opposite direction.
Also, you can run sync jobs manually; they do not have to be scheduled, and you can trigger them manually even if they are.
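A rough sketch of such a manual run (VM ID, target IP and pool are placeholders, not my real values):

# one-off sync of VM 140 to the other node, independent of any schedule
pve-zsync sync --source 140 --dest 192.168.1.2:rpool/data --verbose

# the copied VM config files land here and may need fixing before use
ls /var/lib/pve-zsync/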
Hi,
I have replication set to every minute. This VM has the QEMU agent, but it is currently not working. When the backup started, and replication along with it, the replication never actually started and is stuck in the syncing state:
162-0 Yes local/p35 2020-12-15_21:19:51 pending...
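For anyone hitting the same thing, the state above can be inspected on the node like this (job ID taken from my output; yours will differ):

# list all replication jobs and their current state
pvesr status

# ask for an immediate run of one job
pvesr schedule-now 162-0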
FYI I live migrated another 50 KVM VMs without issue, including WHM/cPanel/CloudLinux.
Live migration only failed for that VM with two disks.
Some time in the future, after I upgrade both nodes to the most recent PM version, I will test again.
If it fails again, then it is reproducible, so I will open...
Hi guys.
All your suggestions are nice.
However, live migration with LVM takes forever, because it syncs the whole VM, whereas with ZFS and replication it takes just a few seconds.
However, I solved this by installing the Intel microcode package, and now the host uses much less CPU, i.e. works as expected. :)...
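For context, these live migrations with local disks are started along these lines (VM ID and target node are placeholders):

# live migrate a VM together with its local disks
qm migrate <vmid> <targetnode> --online --with-local-disks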
Small update, I did live migrate 5 more VMs, and all worked.
The only one that died was the WHM/cPanel VM with two disks.
Maybe it is related to the number of disks, ...
Live migration failed for a VM with two disks.
It also died on the source side.
Offline migration (as it was dead anyway) worked, and the VM recovered afterwards.
I'm attaching the live migration log.
Should I report a bug or..?
[Attached: live migration log from Proxmox Virtual Environment 6.2-12, Virtual Machine 142 (XYZ) on node 'p37']