To add to this: today we decided to try the "fix" on the Synology of unchecking the box "Reply to ARP requests if the target IP address is a local address configured on the incoming interface". It worked... for the 6.0 box, but the 5.4 boxes then stopped being able to access the Synology NFS share...
We're experiencing the same issues here.
I ran this down to the point that we decided to roll back to version 5.4
So it's definitely something to do with the rpcinfo probe failing in Proxmox v6.0. If I run "rpcinfo <synology-server-ip>" directly, I get back a list of services available on the Synology...
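For anyone else chasing this, here's roughly how I've been poking at the RPC side by hand (192.168.1.50 is just a placeholder for your Synology's address):

# list the RPC programs (portmapper, mountd, nfs, ...) registered on the NAS
rpcinfo -p 192.168.1.50

# probe the NFS service specifically over TCP and UDP
rpcinfo -t 192.168.1.50 nfs
rpcinfo -u 192.168.1.50 nfs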
Interestingly, the affected guests were all QEMU VMs; the one LXC container I have came up without error.
I can certainly run the command on the affected disks and resolve the issue, but I'm a little concerned that this error only showed up after a reboot and not before.
Sure...
I already fixed the VM disk in question using the command I mentioned before, so here's the fixed image info:
rbd image 'vm-100-disk-1':
        size 8192 MB in 2048 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.2beb5b643c9869
        format: 2
        features...
Ok, I get that, and thank you.
However, what I'm after is why I might have received these errors in the first place when I was not attempting to use any extra features.
Are you indicating that following your suggestion will get Proxmox to switch to using nbd? At what point did my installation change to needing this?
I started out with the latest version 5 download and installed each node with the no-subscription repository and have kept them updated... so was...
Forgot to mention: following the instructions in the original error message and running this command fixes the issue:
rbd feature disable test/vm-100-disk-1 object-map fast-diff deep-flatten
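In case it helps anyone else, this is roughly how I've been double-checking that the change took on each disk (pool/image names are from my setup, and manually mapping is only a sanity check since Proxmox handles the mapping itself):

# show which feature flags are still set on the image
rbd info test/vm-100-disk-1 | grep features

# optionally confirm the kernel RBD client can map the image now
rbd map test/vm-100-disk-1      # prints a /dev/rbdX device on success
rbd unmap test/vm-100-disk-1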
So what would have caused that image to report those unsupported features?
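For what it's worth, and purely as a guess on my part: newer Ceph releases create images with object-map, fast-diff and deep-flatten enabled by default, and the kernel RBD client doesn't support all of those, so any image created after the defaults changed could trip this. If that turns out to be the cause, something like the following in ceph.conf should pin new images to a kernel-friendly feature set (3 = layering + striping; treat it as an illustration, not a recommendation from the devs). Existing images keep whatever features they were created with, which is why the per-image "rbd feature disable" is still needed:

# /etc/ceph/ceph.conf (illustration only)
[global]
    rbd default features = 3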
I'm bringing this thread back because I just experienced this same sort of behavior, and wanted to elaborate on what happened so the devs can take notice.
I have a testing cluster of 4 nodes in my home lab. I have Ceph set up across 3 nodes with different hard drives. I have a file storage VM...