Search results

  1. VictorSTS

    [TOTEM ] Retransmit List ... causing entire HA cluster to reboot unexpectedly.

    Keep in mind that in a 2-node cluster, if one node loses quorum, the other will lose it too, as it won't have a majority of votes (it will have just 1 vote, which is exactly 50% of the 2 total votes). A 2-node cluster + HA will not provide any redundancy/resiliency at all. At the very least, add a QDevice...
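The vote math can be checked and fixed with `pvecm`; a minimal sketch, assuming the QDevice runs `corosync-qnetd` on a third host at 192.0.2.10 (hypothetical address):

```
# On a 2-node cluster each node holds 1 of 2 votes; quorum needs 2,
# so the surviving node drops to 1/2 votes and loses quorum too.
pvecm status            # shows "Expected votes: 2"

# Add an external QDevice as a tie-breaking third vote:
pvecm qdevice setup 192.0.2.10
pvecm status            # now "Expected votes: 3"; one node may fail safely
```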
  2. VictorSTS

    Scrub won't complete on degraded ZFS pool

    To me it seems that the drive that ends up in DEGRADED state is dying in some funky way that causes the behavior you see. I would make sure you have a backup, remove the failing drive, connect a new one and use zpool replace to resilver it. You could even add a third drive if it is a mirror, but...
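A minimal command sketch of that replacement, assuming a pool named `tank` and hypothetical device paths:

```
# Identify the DEGRADED/FAULTED device
zpool status tank

# Swap in the new disk and resilver onto it
zpool replace tank /dev/disk/by-id/old-failing-disk /dev/disk/by-id/new-disk

# For a mirror, you could instead attach a third disk first, then detach
# the failing one once the resilver completes:
zpool attach tank /dev/disk/by-id/healthy-disk /dev/disk/by-id/new-disk
zpool detach tank /dev/disk/by-id/old-failing-disk

# Watch resilver progress
zpool status -v tank
```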
  3. VictorSTS

    [SOLVED] Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Maybe the same symptoms, but certainly a different root cause, as this got sorted out in kernel 6.8, which is the default in PVE 8.4.1
  4. VictorSTS

    Problem with time within vms

    I would use QEMU Agent hook scripts instead, so you can run inside the VM whichever time sync command you need when the filesystem is thawed. Some details on [1] and [2]. Out of curiosity: which DB is it? Using Percona, MySQL GTID replication or PostgreSQL, I haven't seen that snapshot or...
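A guest-side sketch of such a hook: qemu-guest-agent runs executables in /etc/qemu/fsfreeze-hook.d/ around each fsfreeze (e.g. during snapshot backups), passing "freeze" or "thaw". The use of chrony here is an assumption; adapt to whatever time sync daemon the guest runs.

```shell
#!/bin/sh
# Sketch of a qemu-guest-agent fsfreeze hook: install as an executable
# file under /etc/qemu/fsfreeze-hook.d/ inside the guest.
sync_clock_on_thaw() {
    case "$1" in
        thaw)
            # Step the clock right after the filesystem is thawed,
            # if chrony is available; ignore errors if chronyd is down.
            if command -v chronyc >/dev/null 2>&1; then
                chronyc makestep || true
            fi
            ;;
        freeze)
            : # nothing to do before the snapshot
            ;;
    esac
}

sync_clock_on_thaw "$1"
```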
  5. VictorSTS

    [SOLVED] high latency clusters

    https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap You'll run into issues eventually
  6. VictorSTS

    Memory usage graphic

    Feeling that I'm going to repeat myself a bit too much :), but... that is showing the configured RAM in the VM, not the used RAM. The green area will be drawn regardless of the power state of the VM or whether it has ever been powered on. If you power on the VM, a blue area will be drawn, indicating...
  7. VictorSTS

    Corosync / HA - cluster wide reboot

    HA acts locally on each host and will fence a host if the host loses quorum. To lose quorum, corosync on that host has to decide that neither link0 nor link1 is operating properly (NIC link down, switch down, too much jitter, too much packet loss, etc.). As long as a host has at least one...
  8. VictorSTS

    Load balancing and redundancy

    Unless you can stack or run MLAG to create LACP bonds between ports of those two switches, if you really want to add the bandwidth, go for full mesh routed with FRR using two LACP bonds per server as links among them [1]. If 10G per node is enough, simply use an active-passive bond on PVE...
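A sketch of the simpler active-passive option for /etc/network/interfaces on PVE, assuming hypothetical interface names enp1s0f0/enp1s0f1 and example addresses:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode active-backup
    bond-primary enp1s0f0
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With each bond slave cabled to a different switch, either switch can fail without taking the node off the network, at the cost of using only one 10G link at a time.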
  9. VictorSTS

    Memory usage graphic

    It draws both total and used memory. The green graph is total; used would be blue and will show values only when the VM is/was running:
  10. VictorSTS

    Memory usage graphic

    But it still has some memory configured in its settings, and pvestatd uses that to draw the graph irrespective of the VM's running state.
  11. VictorSTS

    Memory usage graphic

    The green area simply shows the total memory configured for a VM and will always be drawn. The blue area is the used RAM and will be drawn only if the VM is running, reflecting the amount of used memory from the point of view of QEMU/PVE.
  12. VictorSTS

    How to use private IP for proxmox cluster nodes to communicate

    /etc/hosts must have only one entry referencing the host; remove the entry with the public IP. Also, restart the pveproxy service on node 01, as the error you get happens when connecting from node 02 to the API of node 01. Then, use the webUI to join node 02 to the cluster using assisted join. You don't...
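A sketch of that state on node 01, with example internal-LAN addressing and hostnames:

```
# /etc/hosts -- keep only the internal-LAN entry for this host;
# the line with the public IP has been removed:
127.0.0.1       localhost
172.16.1.2      proxmox1.domain.tld proxmox1

# then restart the API proxy so it picks up the change:
systemctl restart pveproxy
```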
  13. VictorSTS

    How to use private IP for proxmox cluster nodes to communicate

    What are the exact contents of these files on your 01 node (the only one in the cluster at the moment)?: - /etc/hosts - /etc/pve/.members Your /etc/hosts must point to an IP of the host; given you want nodes to communicate via the internal LAN, it should be like this: 172.16.1.2 proxmox1.domain.tld...
  14. VictorSTS

    PBS prune is removing backups that should be kept

    Look for the backups kept due to keep-monthly, in blue (you may need to increase Duration). Those green ones are correctly kept due to the keep-weekly policy, hence the last backup on Sunday is kept. That's what is happening because PBS prunes the last backup of each month. When time comes to...
  15. VictorSTS

    PBS prune is removing backups that should be kept

    @SteveITS Same behavior with v3.1-x through v3.4-x. Didn't notice before because, fortunately, we seldom need to recover anything more than a few days old. Thanks for pointing that out. February's backup that got kept was vm/1008/2025-02-23T20:11:01Z. Same situation in every month: the backup of the last...
  16. VictorSTS

    PBS prune is removing backups that should be kept

    I have a PVE cluster (v8.1.4) doing backups to a PBS server. The backups run every 3 hours on a */3:11 schedule. On PBS (v3.2-2), I have a prune policy set like this: keep-daily 30, keep-hourly 24, keep-last 3, keep-monthly 12, keep-weekly 4...
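That policy can be dry-run against a single group to see which snapshots would be kept or removed; a sketch, where the repository string and group name are examples:

```
proxmox-backup-client prune vm/1008 --dry-run \
    --repository user@pbs@pbs.example.com:store1 \
    --keep-last 3 --keep-hourly 24 --keep-daily 30 \
    --keep-weekly 4 --keep-monthly 12
```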
  17. VictorSTS

    ZFS 2.3.0 has been released, how long until its available?

    You know, I would never ever use this on any of my PVE systems until it is officially released (I simply love stability), but thank you, all of you with more courage than me, for testing it before I ever think of using it. Real-world benchmarks with the same hardware and PVE workloads would be helpful for...
  18. VictorSTS

    Dos Network Support

    You will have to manually edit the VM config (in /etc/pve/local/qemu-server) and set the model in the NIC line to ne2k_isa or ne2k_pci. All the details here [1]. [1] https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines#qm_configuration
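A sketch of what that line looks like in the config file; the MAC address and bridge name are examples:

```
# /etc/pve/local/qemu-server/<vmid>.conf
net0: ne2k_pci=BC:24:11:12:34:56,bridge=vmbr0

# or, for DOS drivers that expect an ISA-bus NE2000 card:
# net0: ne2k_isa=BC:24:11:12:34:56,bridge=vmbr0
```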
  19. VictorSTS

    Quorum During Disaster

    Avoid stretching a PVE cluster; it isn't that good of an idea. At the very least you will need to make sure each side has the same number of votes, and add a QDevice in a third location that helps with quorum. Get ready to deal with nodes with different votes (i.e. when a server is down for any...