I just resolved this issue, and it turned out to be a very simple fix.
I ran a packet capture on the NFS server: rpcinfo traffic was captured successfully, but showmount, which uses the same RPC protocol, did not work and no packets for it were captured.
I decided to check...
After running showmount locally on the NFS server:
jignesh@bkp2:~$ time sudo showmount -e localhost
Export list for localhost:
/opt/DISK1/backuphdd/OTHERS/elasticsearch-backup 192.168.13.102
/opt/DISK2/XXXXCluster2_VM_BACKUP 192.168.13.33,192.168.13.32,192.168.13.31
real...
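For anyone hitting the same symptom, a generic way to check whether mountd (the RPC service that showmount queries) is registered with rpcbind is to ask the portmapper directly; this is an illustrative sketch, not the exact check from my session:

# List all RPC programs registered on the server
rpcinfo -p <nfs-server>

# Probe mountd over TCP and over UDP; a timeout here matches the showmount symptom
rpcinfo -t <nfs-server> mountd
rpcinfo -u <nfs-server> mountd

If rpcinfo -p answers locally but the mountd probes time out from the clients, that usually points at a firewall dropping traffic to the (often dynamic) mountd port rather than at NFS itself.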
So, I am able to reach the NFS server from all my cluster nodes: ping and telnet work, but showmount times out.
I am able to mount manually on all nodes, yet the logs are still flooded with storage 'nfsbackup' is not online. Kindly refer to the outputs below, as requested.
root@node01:~# ping -c 4...
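As a sanity check of the data path itself, a manual mount can be done like this; the server address and export below are placeholders, not our exact values:

# Mount the export by hand on a node, then inspect and unmount
mkdir -p /mnt/nfstest
mount -t nfs <nfs-server>:/<export-path> /mnt/nfstest
df -h /mnt/nfstest
umount /mnt/nfstest

A manual NFSv4 mount can succeed even when showmount fails, since NFSv4 does not use mountd, which would explain the storage working while the status check reports it offline.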
Hello,
We are currently facing an issue with an NFS mount used by our cluster; we only use it for VM backups.
The problem started after our NFS server (Debian 11) was rebooted.
pvestatd is flooding the log with:
Sep 14 23:06:00 node01 pvestatd[3331]: storage...
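The storage status that pvestatd reports can also be queried by hand with standard PVE tooling, which makes the failure easier to reproduce on demand:

# Query the status of all configured storages; an unreachable one shows as inactive
pvesm status

# Follow pvestatd's messages live while testing
journalctl -u pvestatd -f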
CEPH OSD FLAGS
noout -- If mon_osd_report_timeout is exceeded and an OSD has not reported to the monitor, the OSD is normally marked out. The "noout" flag tells the Ceph monitors not to mark any OSD out of the CRUSH map and not to start recovery and rebalance activity, to maintain the...
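Setting and clearing the flag is a single command on any monitor node (recent PVE versions also expose the global flags in the GUI under Ceph > OSD):

# Prevent OSDs from being marked out during maintenance
ceph osd set noout

# Restore normal out/recovery behaviour afterwards
ceph osd unset noout

# Confirm which flags are currently set
ceph osd dump | grep flags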
This happened when we tried to hot-swap a faulty HDD and the wrong drive was pulled out. The drive letter changed, so the OSD could no longer detect its disk. I simply deleted the OSD that went down and created it again with the steps below.
Set the OSD flags so the pool does not start rebalancing.
on...
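For reference, the usual remove-and-recreate procedure on a PVE/Ceph node looks roughly like the following; the OSD id <N> and the device name are placeholders and may differ from the exact steps I followed:

# Take the dead OSD out and remove it from the cluster
ceph osd out osd.<N>
systemctl stop ceph-osd@<N>
ceph osd crush remove osd.<N>
ceph auth del osd.<N>
ceph osd rm osd.<N>

# Recreate the OSD on the replacement disk
pveceph osd create /dev/sdX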
@shrdlicka Yup, I referred to that and did exactly the same thing, which got things moving forward for me. It felt like a major blocker during this migration, but luckily I found it, implemented it, and it works. I just forgot to post back here :D
Hello @leesteken, I just resolved this issue by making the following changes.
Check the following file:
cat /sys/module/vhost/parameters/max_mem_regions (ours reported 64)
--------------------------------------------------------------------------------------------------
Make the following changes on all...
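For context, the commonly documented way to raise this vhost limit is a modprobe option; the value 512 is the usual suggestion and is an assumption here, not a confirmed detail of my setup:

# /etc/modprobe.d/vhost.conf -- apply on all cluster nodes, then reboot (or reload the vhost module)
options vhost max_mem_regions=512

# After the reboot, verify the new value
cat /sys/module/vhost/parameters/max_mem_regions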
I am facing an issue where I migrated a VM from Proxmox 6.4.x to 7.3.
1. We implemented a new setup with a 3-node cluster on 7.3.
2. Took a backup of the VM on the old 6.4 setup.
3. Restored the backup snapshot on the new 7.3 setup.
4. In the old setup the processor type was the default kvm64, and on the new one we are...
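Checking and pinning the CPU type on the restored VM is straightforward with the standard qm commands; the VMID 100 is hypothetical:

# Show the restored VM's current CPU setting (absent means the kvm64 default)
qm config 100 | grep -i cpu

# Explicitly set kvm64 to match the old 6.4 setup
qm set 100 --cpu kvm64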
Hello, we have a 3-node cluster up and running in production. One of our nodes went down, and when it came back up everything returned to normal. The only remaining issue is that one OSD on this node shows down/out under Ceph > OSD. The HDD behind that OSD is working...
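When an OSD stays down/out after a node reboot, two standard checks narrow it down quickly; the OSD id <N> is a placeholder:

# Locate the down OSD in the CRUSH tree and confirm its state
ceph osd tree

# Try restarting its daemon and read the log for the failure reason
systemctl restart ceph-osd@<N>
journalctl -u ceph-osd@<N> -e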