Proxmox and NFS with DRBD

taich

Member
Aug 15, 2020
Hi,

I run a Proxmox Cluster with 3 nodes and a Pacemaker Cluster with NFS on 2 nodes.
I use the NFS cluster as HA storage.

All of that worked fine for a while. I could switch from NFS node1 to node2 without problems; running VMs on Proxmox did not notice the change.

But for the last 2 or 3 months, all VMs have had their disks mounted read-only after every switch of the NFS storage.

Why can't the VMs survive a storage cluster switch?
 
There should be information in dmesg/the journal as to why the kernel marks the NFS mount read-only. Most likely the failover transition takes longer than the default timeout. If that is the case, one solution is to increase the timeout; another is to figure out why the transition takes longer than the default timeout. Or you can switch to a hard mount.
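Since the symptom is the kernel remounting the guest filesystems read-only, the first things to look at are the kernel log around the failover and the options of the current NFS mounts. A diagnostic sketch (run on a Proxmox node; the grep patterns are only examples, adjust as needed):

```shell
# Kernel messages around the failover usually name the NFS server and the error:
dmesg -T | grep -iE 'nfs|remount|read-only'
journalctl -k --since "-1h" | grep -iE 'nfs|timed out'

# Show the options of the current NFS mounts. A "soft" mount returns I/O errors
# to the guests when the failover outlasts the timeout, which is a common reason
# for guests remounting their disks read-only:
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS
```

If the mounts show `soft` (or a short `timeo`), that alone would explain guests going read-only during a slow failover.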


 
I could not find where to increase the timeout or set a hard mount. Where does this configuration happen?
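For storage defined through Proxmox, the NFS mount options live in the storage entry in /etc/pve/storage.cfg on the nodes. A sketch, assuming a storage named `ha-nfs`; the server address, export, and paths are placeholders:

```
# /etc/pve/storage.cfg
nfs: ha-nfs
        server 192.168.1.50
        export /srv/nfs/pve
        path /mnt/pve/ha-nfs
        content images
        options vers=4.2,hard,timeo=600,retrans=5
```

`timeo` is in tenths of a second, so `timeo=600` is a 60-second retry interval, and `hard` makes the client retry indefinitely instead of returning I/O errors to the guests. The same options can be set from the CLI with `pvesm set ha-nfs --options vers=4.2,hard,timeo=600,retrans=5`.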
 
Oh sorry, I use it as Proxmox storage, where the VMs have their "hard disks".

Thank you for the hint.
 
