Search results

  1. se4n_1

    Default Proxmox Firewall Settings

    Morning. I would like to compartmentalize my containers in Proxmox with the Proxmox firewall. Currently, in datacenter options, I have firewall=no. If I change this to firewall=yes with the firewall options out ACCEPT and in ACCEPT in the datacenter's Firewall tab, are these the same rules as... (see the firewall sketch after the results list)
  2. se4n_1

    [SOLVED] Optimum Method to Test Proxmox HA

    Thanks Aaron, that is very clear. I have a separate fast network for storage/backup traffic, so the corosync/data network is never congested. To summarize for posterity (please correct me if I am wrong): 1. A graceful reboot does not automatically fail over HA, as the HA config in...
  3. se4n_1

    [SOLVED] Optimum Method to Test Proxmox HA

    Nice idea, Aaron - is there a way to set that watchdog timer lower?
  4. se4n_1

    [SOLVED] Optimum Method to Test Proxmox HA

    Right, so it seems these settings can be controlled in /etc/pve/datacenter.cfg: adding ha: shutdown_policy=failover should cause the desired behaviour. I will test this and mark the thread solved if that is indeed the case. (See the datacenter.cfg sketch after the results list.)
  5. se4n_1

    [SOLVED] Optimum Method to Test Proxmox HA

    Good morning all, I have set up my Proxmox 6.2-12 nodes and enabled high availability for my VMs and containers, and now I would like to test the HA functionality. I tried rebooting a node but quickly found this does not work - the VMs remain in a frozen state, as the shutdown is graceful. Short...
  6. se4n_1

    [SOLVED] corosync-qdevice.service fails to start with 'received server error 18. Disconnecting from server'

    OK! Yes, it works flawlessly! For posterity: one must use ring0_addr in corosync.conf - any other value will not work, at least until this bug(?) is fixed. Thank you, Fabian, for your laser focus in spotting the issue so fast! (See the corosync.conf sketch after the results list.)
  7. se4n_1

    [SOLVED] corosync-qdevice.service fails to start with 'received server error 18. Disconnecting from server'

    Just for completeness, here is the debug log from the qnetd on the qdevice when I restart the qdevice service on one of the nodes: To be clear, it seems the issue in the mailing list above is similar but different. My nodelist in corosync.conf is above the quorum entries and does have the correct...
  8. se4n_1

    [SOLVED] corosync-qdevice.service fails to start with 'received server error 18. Disconnecting from server'

    Hi there, sure - I just saw this as well, which is exactly along the lines you mentioned: https://www.mail-archive.com/users@clusterlabs.org/msg07326.html

      # cat /etc/corosync/corosync.conf
      logging {
        debug: off
        to_syslog: yes
      }
      nodelist {
        node {
          name: srv-pve-01
          nodeid: 2...
  9. se4n_1

    [SOLVED] corosync-qdevice.service fails to start with 'received server error 18. Disconnecting from server'

    Tried a few more things this morning, but none bore fruit:
    - Checked port 5403 is open on all nodes + the qdevice
    - Telnet from each node to 5403 on the QDevice
    - Rebooted the QDevice and confirmed telnet again
    The network seems OK - is there a chance the qdevice version in the Debian 10 repos and the one in Proxmox 6.2... (see the port-check commands after the results list)
  10. se4n_1

    [SOLVED] corosync-qdevice.service fails to start with 'received server error 18. Disconnecting from server'

    I have a problem getting a QDevice to work on Proxmox 6.2-12 (see the setup sketch after the results list). First I install the QDevice package on the 3rd witness box (Raspberry Pi OS 20-08-2020):

      # apt install corosync-qnetd
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following NEW...
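
Notes

Result 1 asks whether enabling the datacenter firewall with both default policies set to ACCEPT behaves like leaving it disabled. A minimal sketch of the cluster-wide options in /etc/pve/firewall/cluster.fw, assuming a stock setup; the values mirror the in/out ACCEPT options the post describes:

    [OPTIONS]
    # 0 (the default) leaves the datacenter firewall off; 1 turns it on
    enable: 1
    # default policies applied to traffic that matches no rule
    policy_in: ACCEPT
    policy_out: ACCEPT

The practical difference: with enable: 0 no rules are evaluated at all, while enable: 1 with ACCEPT policies still passes traffic through the rule chains, so any per-node or per-guest rules added later take effect immediately.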
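
Results 2-5 are the same HA thread: by default, a graceful reboot freezes HA services on the node rather than failing them over, and the fix the poster lands on is the datacenter-wide shutdown policy. A one-line sketch of /etc/pve/datacenter.cfg, matching the setting quoted in result 4:

    # /etc/pve/datacenter.cfg
    # failover: recover HA services onto other nodes on a graceful
    # shutdown/reboot instead of freezing them on the leaving node
    ha: shutdown_policy=failover

Running ha-manager status on any node is a quick way to watch the services move during the test.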
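
Results 6-8 cover the corosync-qdevice failure; the resolution in result 6 is that the node address key in corosync.conf must be ring0_addr. A sketch of the relevant nodelist fragment - the node name is taken from result 8, while the nodeid and address are hypothetical:

    nodelist {
      node {
        name: srv-pve-01
        nodeid: 1
        quorum_votes: 1
        # per result 6, this key must be ring0_addr -
        # anything else kept corosync-qdevice from starting
        ring0_addr: 192.0.2.11
      }
    }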
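
The manual checks in result 9 can be reproduced with two commands; a sketch assuming a hypothetical qdevice address of 192.0.2.50 (qnetd listens on TCP 5403):

    # on the qdevice host: confirm corosync-qnetd is listening
    ss -tlnp | grep 5403
    # from each cluster node: confirm the port is reachable
    nc -zv 192.0.2.50 5403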
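
Finally, result 10 is the start of the thread: installing the witness daemon. For context, the usual qdevice sequence on Proxmox - sketched with the same hypothetical witness address - puts corosync-qnetd on the witness, corosync-qdevice on every cluster node, and then registers the device:

    # on the witness (the Raspberry Pi):
    apt install corosync-qnetd
    # on every cluster node:
    apt install corosync-qdevice
    # from any one cluster node:
    pvecm qdevice setup 192.0.2.50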
