[SOLVED] upgrade issue 8.0.2 -> 8.3.0, node not usable

ilia987

I have a cluster of around 15 nodes.

Today I successfully upgraded PBS from 3.0 to 3.3.2 (reboot and a full test of backup/restore worked).

Then I tried to upgrade a single node (from the GUI). Everything went normally and the system worked after the upgrade.
I initiated a reboot to make sure everything was stable after a reboot, including the kernel, but now I have an issue connecting to the host via the GUI (SSH works). The node is not fully working: it has network share issues (all shares are marked with a gray question mark), and as a result VMs and LXCs fail to start.

Any tips on how to investigate the issue?

Solution:

Eventually I managed to stabilize the system.

For some reason, upgrading and then rebooting a Proxmox host from 8.0 to 8.3 that had some NFS mounts (hosted on a standalone TrueNAS server) caused the TrueNAS share to generate errors (it started on the upgraded host: after the reboot it tried to reconnect and failed), and then over the weekend more and more servers that mounted that NFS share had problems with it.

Fortunately, rebooting the TrueNAS server solved it.
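In case it helps anyone, this is a rough first-pass check I would run from the affected host; the server address below is a placeholder, not our real one (showmount and dmesg are standard tools, nothing PVE-specific):

# Ask the TrueNAS box which NFS exports it is actually serving right now
# (showmount comes with the nfs-common package; the IP is a placeholder)
showmount -e 192.0.2.10

# Look for NFS timeouts / "server not responding" messages in the kernel log
dmesg | grep -i nfs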
 
systemctl restart pvestatd
After about 15 seconds, is everything fine again? If not, do a listing (ls) inside your NFS-mounted shares to see what is going on.
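Something like this; /mnt/pve/ is where PVE normally mounts NFS storages, and the storage name is just a placeholder:

systemctl restart pvestatd
sleep 15
pvesm status                              # every storage should report "active" again
timeout 10 ls /mnt/pve/your-nfs-storage   # a hang or timeout here points at the NFS server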
 
No change, all network storages are still marked as down.
Going inside a network-mounted storage (one that is marked as down) via the command line works fine; all files are there and accessible.
[Screenshot: storage view showing the network mounts greyed out with question marks]
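Note: ls alone can succeed from cached directory data even when a mount is half-dead; as far as I understand it (an assumption about pvestatd's behaviour, not verified against the source), pvestatd greys a storage out when its status query, a df-style statfs call, hangs. A quick way to test that, with the mount point as a placeholder:

timeout 5 df /mnt/pve/your-nfs-storage && echo "statfs OK" || echo "statfs hung or failed"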
 
Look at your Ceph health.
Sorry if you are not using Ceph; then you may find help in other threads here.
If you are having problems with Ceph storage in PVE, it would be good to put that in the thread text as well.
 
Ceph is working on all other nodes,
and the last mount (docs) in the picture is NFS-based, hosted on a QNAP, and it is accessible via the command line.
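To double-check how that docs mount is actually set up (server address, NFS version, timeouts), the standard NFS client tools can report it; the commands below assume the nfs-common utilities are installed:

findmnt -t nfs,nfs4     # NFS mounts the kernel currently knows about
nfsstat -m              # per-mount options: server, vers=, timeo=, retrans=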
 
Restart the NFS service on the QNAP (for the docs mount). Does it look better then?
Mmh, but why are the Ceph storages greyed out ... that is normal when pvestatd cannot stat them ... but why, when Ceph is working ... ?
Does "systemctl status pvestatd" say active and running?
 
I prefer not to touch working storages (the cluster is currently in use by our company).

systemctl status pvestatd shows enabled and running.
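If the daemon is active but the storages stay grey, the pvestatd log usually says which storage the status query is choking on; standard systemd tooling is enough to check (nothing here is PVE-specific beyond the unit name):

journalctl -u pvestatd -n 50 --no-pager             # most recent pvestatd messages
journalctl -u pvestatd --since "-1h" | grep -iE "timeout|error|nfs"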
 
