Connection timed out (596) on local storage (cluster)

pos-sudo

Jun 13, 2021
Dear all,

I've created a cluster with two nodes. The problem seems to be on one node: there are already VMs running on its local storage, but when I try to create a new VM it can't find the local storage, due to a communication failure / timeout (0) / connection timed out (596). I can still see the stats of the storage and the disks on it. Could anyone help me with this issue?

Note: if I run pvesm status I see the local disk as well.

The weird thing is that there are no problems on the other node. A reboot is not preferred because of the VMs running in production.

Thanks in advance!
 

Attachments

  • Knipsel.PNG (40.9 KB)
Are you using the same storage setup on both nodes? If not, you need to ensure that each storage is configured to be available only on the correct node under "Datacenter -> Storage". Note that when a node joins a cluster, its own storage config is overwritten; so if, say, the second node uses LVM and the first ZFS, the LVM local storage entry will be overwritten on join and needs to be re-added manually.

If possible, can you describe your node's storage setup more and also post the contents of '/etc/pve/storage.cfg'?
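For reference, a per-node restriction in '/etc/pve/storage.cfg' looks roughly like this (the storage IDs and node name below are just placeholders, not your actual setup):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve-node1
```

The 'nodes' line is what the "Datacenter -> Storage" node selection writes out; it can also be set from the CLI with 'pvesm set local-lvm --nodes pve-node1'.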
 
Thank you for your reply. The issue is fixed: the cause was an NFS storage that was not mounted correctly, so the pvestatd service threw back the communication error. The only weird thing is that I wasn't able to see the current disk space on the local disks either, so somehow the broken mount.nfs affected the statistics of all the disks.
 
Yes, accessing a file/directory on a bad NFS mount causes the calling process to lock up (since the syscall hangs). This manifests itself as a hanging pvestatd on that node, meaning it can no longer provide info for any of the available storages.
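As an aside, one way to check whether an NFS mount is responsive without risking an indefinite hang yourself is to wrap the stat call in a timeout. A minimal sketch (the mount point '/mnt/pve/mynfs' is a placeholder; substitute your own):

```shell
# Probe a possibly-dead NFS mount; give up after 3 seconds instead of hanging.
# /mnt/pve/mynfs is a hypothetical mount point -- replace with yours.
if timeout 3 stat /mnt/pve/mynfs >/dev/null 2>&1; then
    echo "mount responsive"
else
    echo "mount unresponsive or missing"
fi
```

Note that a process already stuck in the hanging syscall (like pvestatd here) cannot be helped by this; the probe only prevents new callers from blocking.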
 
Thank you for the additional explanation. Have a nice day.
 
