After upgrading to PVE 6.1 we have had this occur during KVM migrations. We use Ceph. It happened on just 3 out of 10 migrations:
Check VM 207: precondition check passed
Migrating VM 207
Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441...
This is an interesting thread. I have a couple of questions:
1- The rules above: were those put in a .conf file, or where? The format does not look the same as in my next question.
2- Could you post the output from:
ceph osd crush rule dump
I remember tape backup and non-RAID systems, and crashes always at the worst time. Like 3 hours after getting my wisdom teeth pulled, a lightning spike jumped from the telephone pole to the phone wire attached to the motherboard. RAID was a great improvement, then ZFS, and now we use Ceph.
I'm curious -...
Great that you use rsnapshot! We've used it for a very long time. Note: if you put it in a KVM, never use LXC, as for some reason restoring a backup can take many hours. Something to do with the hard links.
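The hard-link mechanism rsnapshot relies on between rotations is easy to see in a quick shell demo (the paths under /tmp are just for illustration):

```shell
# build a small source tree and a hard-linked "snapshot" of it,
# the same mechanism rsnapshot uses between backup rotations
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/src
echo "hello" > /tmp/hl-demo/src/file.txt

# cp -al copies the directory structure but hard-links the files
cp -al /tmp/hl-demo/src /tmp/hl-demo/snap

# both names now point at the same inode: link count is 2,
# so the "copy" costs almost no extra disk space
stat -c '%h' /tmp/hl-demo/src/file.txt
```

Millions of such links are also the likely reason a restore inside a container crawls: every single hard link has to be recreated.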
For documents we use Nextcloud; we have a KVM for that. In case you are not...
1st: what is an 'internodal' type network?
Are you using both ports on the existing NIC?
If there is room to keep the existing NIC and add the new one, then consider just adding the new one. (Note: on our systems we found that Linux starts naming with the NIC on the right [when looking at...
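Before adding the new card it's worth noting down how the kernel currently names the ports, so you can tell afterwards whether the names shifted. A quick way to do that (standard iproute2 and sysfs, nothing Proxmox-specific):

```shell
# one line per interface: name, state, MAC address
ip -br link show

# map each interface name to its PCI device path, which tells
# you which physical card/slot a name belongs to
for n in /sys/class/net/*; do
  printf '%s -> %s\n' "$(basename "$n")" "$(readlink -f "$n/device" 2>/dev/null)"
done
```

Save that output before the swap; if the new card steals a name like eno1 or enp3s0, the PCI paths will show you which port is which.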
Backups and user data: I'd solve those 1st. There are multiple right options.
I'd say get backups fixed 1st. Do you have extra disk slots in the server chassis? Or a spare system around that you can use for NFS? After backups are reliable, then work on the rest.
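If you go the spare-system route, a minimal NFS export is only a few lines. This is a sketch on a Debian-based box; the path, hostname, and subnet below are examples, adjust to your network:

```shell
# on the spare box: install the NFS server and export a directory
apt install nfs-kernel-server
mkdir -p /srv/backup
echo '/srv/backup 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# on the PVE node: mount it and point your backup storage there
mkdir -p /mnt/backup
mount -t nfs sparebox:/srv/backup /mnt/backup
```

In Proxmox you'd then add it as NFS storage in the GUI (or storage.cfg) rather than mounting by hand, but the export side is the same.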
From what I've read, go with one OSD per HDD. If you have more than a little disk I/O, you may want to add a separate cache drive like a fast NVMe. Do some research on the cache drive and how to set it up.
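For reference, with ceph-volume the one-OSD-per-HDD layout with a fast DB device looks roughly like this. The device names are examples and this is a sketch, not a tested recipe; do read the BlueStore sizing docs before committing the NVMe partitions:

```shell
# one OSD per spinning disk, with the RocksDB/WAL for each
# placed on its own partition of a fast NVMe
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p2

# verify what got created and where the db devices landed
ceph-volume lvm list
```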
1st: I am not a Ceph expert. A lot depends on how much disk I/O will occur.
I'd say you need very fast SSDs to use for journals. We have not used journals for a long time, so I cannot give advice on how to set them up.
If the system gets laggy you may want to consider using NVMe for...
I'd suggest using Intel DC series NVMe or SSDs. In the past we got good pricing on eBay.
We have these:
Hello. I see, jocc = just out of curiosity.
Anyway, we ended up just following the instructions and everything worked out.
Also, while reading the Ceph release notes we saw there were health monitoring settings that can be turned on. That led to being exposed to a bad bug which caused all our VMs to...
A few months ago we got Ceph to work with Zabbix. I had another thread on this; there are some hints there. Check / search for that thread and send me a reply; I can dig up our configuration notes and post them.
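Until I dig up the notes, the core of it is the built-in ceph-mgr Zabbix module. Roughly, it goes like this (the hostname and identifier below are placeholders for your own values):

```shell
# enable the zabbix module in the ceph manager
ceph mgr module enable zabbix

# tell it where the zabbix server is and how to identify this cluster
ceph zabbix config-set zabbix_host zabbix.example.com
ceph zabbix config-set identifier our-ceph-cluster

# check the settings and force an immediate send to test the pipe
ceph zabbix config-show
ceph zabbix send
```

You also need the zabbix_sender binary on the mgr nodes and the matching Ceph template imported on the Zabbix side; the details of that are what's in my notes.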