Hi,
You need to assign PVEDatastoreUser on the PBS storage and PVEVMUser on the specific VMs they need access to. Please test these permissions to ensure they meet your requirements.
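If it helps, here is a sketch of how those ACLs could be set on the CLI with `pveum` (the storage ID `pbs-store`, user `alice@pve`, and VM ID `100` are placeholders, adjust them to your setup):

```shell
# Grant PVEDatastoreUser on the PBS storage (replace 'pbs-store' and 'alice@pve')
pveum acl modify /storage/pbs-store --users alice@pve --roles PVEDatastoreUser

# Grant PVEVMUser on a specific VM (replace '100' with the VM ID)
pveum acl modify /vms/100 --users alice@pve --roles PVEVMUser
```

You can achieve the same via Datacenter -> Permissions in the web UI.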
Hi,
If you want to avoid quorum loss, consider installing QDevice [0] on a Raspberry Pi to maintain quorum with minimal power usage.
[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
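As a rough outline of the setup described in [0] (package names per the admin guide; `<PI-IP>` is the address of your Raspberry Pi):

```shell
# On the Raspberry Pi (the external vote daemon):
apt install corosync-qnetd

# On all Proxmox VE cluster nodes:
apt install corosync-qdevice

# Then, on one cluster node, register the QDevice:
pvecm qdevice setup <PI-IP>
```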
Hi,
Did you check our wiki [0] regarding the network considerations?

[0] https://pve.proxmox.com/wiki/Proxmox_VE_inside_VirtualBox#Network_Considerations
Hi,
Did you try the same config, but without `bridge_maxwait`? I.e.:
auto lo
iface lo inet loopback
auto enp5s0
iface enp5s0 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.86.204/20
gateway 192.168.86.1
bridge-ports enp5s0
bridge-stp off
bridge-fd 0
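After editing `/etc/network/interfaces`, you can apply the change without a reboot (assuming ifupdown2, which is the default on current Proxmox VE):

```shell
# Reload the network configuration from /etc/network/interfaces
ifreload -a
```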
Yes, that shouldn't cause any problems, since this has been the solution for many users. And yes, since you have enough RAM, your server doesn't have to use swap.
Hi,
Did you check the syslog while joining the cluster? You can follow it with `journalctl -f` and watch for any interesting errors or warnings!
Why did you change the SSH keys?
Are the Proxmox VE servers on the same version?
Hello,
You can disable swap on your node; that was the solution for some users. To disable swap, run the following command:
swapoff -a
To disable it permanently, edit the /etc/fstab file and disable the following line by...
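For illustration, a typical swap entry in /etc/fstab looks something like the line below (your exact device will differ); commenting it out with a leading `#` keeps it from being activated at boot:

```
# /dev/pve/swap none swap sw 0 0
```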
Hi,
Have you tried to unmount the CIFS share manually using the `umount -f /mnt/pve/SwFileServer-cifs` command? If not, could you please try? After you unmount the CIFS share, try to remove the storage and re-add it from your PVE web UI.
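The whole sequence on the CLI would look roughly like this (assuming the storage ID matches the mount point name, `SwFileServer-cifs`; check `pvesm status` for the actual ID):

```shell
# Force-unmount the stale CIFS mount
umount -f /mnt/pve/SwFileServer-cifs

# Remove the storage definition (same effect as removing it in the web UI)
pvesm remove SwFileServer-cifs
```

Then re-add the storage under Datacenter -> Storage.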
Hi,
Have you tried disabling UEFI boot? Older servers like the HP ProLiant DL360 may not support UEFI properly! Another option is to install Proxmox VE on top of Debian as described in our wiki guide [0].
[0] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
Thank you for the logs!
Losing quorum can trigger the HA manager to initiate a node reboot, which is what seems to be happening, since HA requires stable network communication to ensure that resources stay available.
If you have only one ring_x in your corosync config, please consider...
Hi,
Thank you for the outputs!
On server01 there is no `proxvg`:
root@server01:~# pvesm status
no such logical volume proxvg/proxthin
Name Type Status Total Used Available %
ThinDisk lvmthin active 9731174400 0...
Can you share what happened in the syslog when a node rebooted? This might help us identify what triggered the reboots.
Have you enabled HA on those nodes?
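A quick way to check is to look at the HA stack state on any node:

```shell
# Show the current HA manager status and any configured HA resources
ha-manager status
```

If no resources are listed, the reboots are unlikely to be HA fencing.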
The config you shared looks fairly standard. However, can you try disabling the automatic reboot after upgrades [0]?
Unattended-Upgrade::Automatic-Reboot "false";
EDIT: If the server shuts down again, please provide us with more entries from the syslog.
[0]...
Hi,
Thank you for sharing more of the syslog!
Have you tried to restart the pveproxy service? `systemctl restart pveproxy`
Have you tried to log in from a different/private browser window as well? If the private login works, you may have to clear the cache. If not, try to update the certificates using `pvecm...