Search results

  1. bbgeek17

    HA migration issue with Linux VMs on Proxmox 9.0.10 (FC LVM datastore)

    I can only see this happening if the VM is powered off; PVE HA will then try to start it, potentially detect the storage outage (as you correctly noted, there seems to be no redundancy there), and start it on another host. The Linux kernel will retry I/O indefinitely, and the VM is not shut down by any...
  2. bbgeek17

    Api dont working in a node of the cluster

    Hi @petruzzo, can you provide the following details: the actual command line you run, with full output, good and bad; the output of "pvecm status"; the output of "tail -f /var/log/pveproxy/access.log" during API execution; the output of "journalctl -f" during command execution; pveversion from across the nodes...
  3. bbgeek17

    HA migration issue with Linux VMs on Proxmox 9.0.10 (FC LVM datastore)

    Hi @el_vagokz, welcome to the forum. There is no special sauce in PVE to enable automatic HA migration of VMs between nodes when storage connectivity is lost. Have you looked at the Windows logs during the failure window? Is it possible Windows shuts down (powers off), which causes the HA system to...
  4. bbgeek17

    Cluster reboot after adding a new node

    Hi @Jan Wedershoven, I seem to recall a few reports similar to yours. For example: https://forum.proxmox.com/threads/proxmox-cluster-of-21-nodes-falls-out-of-sync-when-a-single-node-is-added-or-removed.153149/ I'd recommend pursuing an official ticket and reporting back to the community on your...
  5. bbgeek17

    iSCSI under performance

    Ugh, sorry to hear that. When your support contact calls Proxmox "the operating system", you know things are not going well. That said, for people who invested in storage systems that do not officially support PVE, we recommend hiding the PVE "badge" and presenting the client as Debian. In your...
  6. bbgeek17

    Sizing calculator

    "ZFS over iSCSI" is a special scheme that involves root SSH access into the storage appliance, requires the storage appliance to run ZFS internally, and directly manipulates said ZFS. And, of course, exporting the resulting ZFS volumes via iSCSI. You did not specify what SAN appliance you are...
  7. bbgeek17

    Windows guest IO performance

    One more question: For the LVM storage that you tested against. Did you have the "Allow Snapshots as Volume-Chain" enabled? It's critically important to understand whether you are testing on a native LVM LV or a QCOW nested within an LV. The latter is known to introduce performance issues with...
  8. bbgeek17

    Windows guest IO performance

    What caught my attention was the claim of reaching 400K IOPS inside a VM at queue depth 4. In practice, most NAND-based NVMe devices can’t sustain that level of performance. A partial explanation might be that your sequential I/O pattern, combined with the NVMe device’s read-ahead behavior, is...
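    The skepticism about 400K IOPS at queue depth 4 follows from Little's law (concurrency = throughput × latency): sustaining that rate at that depth implies a mean per-I/O latency far below what NAND flash typically delivers for random reads. A minimal sketch of the arithmetic (the function name is mine, for illustration):

    ```python
    # Little's law for storage queues: queue_depth = IOPS * mean latency.
    def required_latency_us(iops, queue_depth):
        """Mean per-I/O latency (microseconds) needed to sustain `iops` at `queue_depth`."""
        return queue_depth / iops * 1_000_000

    # Sustaining 400K IOPS at queue depth 4 would require ~10 microseconds per I/O,
    # well below typical NAND random-read latency.
    print(required_latency_us(400_000, 4))  # -> 10.0
    ```

    Sequential patterns served from device read-ahead or cache can appear to beat this bound, which is consistent with the partial explanation offered above.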
  9. bbgeek17

    Best practic cluster proxmox

    Also, note that 192.90 is not private IP space: https://datatracker.ietf.org/doc/html/rfc1918 (Section 3, Private Address Space). Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
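    The point is easy to verify with Python's standard library: RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for private use, and 192.90.0.0/16 is not among them (it is easy to confuse with 192.168/16):

    ```python
    import ipaddress

    # RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
    # 192.90.x.x is publicly routable address space.
    print(ipaddress.ip_address("192.90.1.1").is_private)    # False
    print(ipaddress.ip_address("192.168.1.1").is_private)   # True
    ```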
  10. bbgeek17

    Best practic cluster proxmox

    You should place a switch on the back-end network.
  11. bbgeek17

    Best practic cluster proxmox

    There is no "backup" traffic in a basic PVE cluster; it is something you can add with an external node, e.g. PBS. Do you mean ZFS replication? If you do not have shared storage, the VMs cannot fail over when a node fails. Are you planning to use ZFS replication? Your bridge config is wrong...
  12. bbgeek17

    Best practic cluster proxmox

    Hi @pelip, welcome to the forum. To answer your question directly: it is impossible for anyone to say, as you provided insufficient data for analysis. Note that 10G is overkill for cluster communication. Are you running Ceph as well? Beyond this, you need to carefully analyze the logs of...
  13. bbgeek17

    Windows guest IO performance

    Hi @Antrill, welcome to the forum. Can you post your fio job definition?
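    For context, a fio job definition of the kind being requested looks like the fragment below. This is an illustrative sketch, not the poster's actual job; the device path and all values are assumptions chosen to match the queue-depth-4 random-read discussion in this thread:

    ```ini
    ; Illustrative fio job: 4k random reads at queue depth 4 via libaio.
    [global]
    ioengine=libaio
    direct=1
    time_based=1
    runtime=60

    [randread-qd4]
    rw=randread
    bs=4k
    iodepth=4
    numjobs=1
    filename=/dev/sdX   ; replace with the device or file under test
    ```

    Posting the full job file matters because parameters like `iodepth`, `direct`, and `rw` determine whether results reflect device performance or caching effects.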
  14. bbgeek17

    Sizing calculator

    No, ZFS is not a cluster-aware file system. Take a look at the PVE storage wiki page for shared storage recommendations: https://pve.proxmox.com/wiki/Storage
  15. bbgeek17

    Missing pve entry?

    Great! Feel free to mark this thread as solved by editing the first post and selecting the prefix from the subject dropdown. That will help keep the forum tidy.
  16. bbgeek17

    Sizing calculator

    There isn’t a single answer that fits all environments, or even a set of best practices you can apply formulaically. The sizing really comes down to: the number of physical CPUs you have today, the number of virtual cores you’ve provisioned, your overprovisioning ratio and actual utilization, RAM...
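    The factors listed above can be combined into a simple back-of-the-envelope check. A minimal sketch, assuming a 4:1 vCPU overprovisioning ceiling (a common rule of thumb, not a PVE limit; the function and threshold are mine):

    ```python
    # Illustrative sizing check: compare provisioned vCPUs against physical cores.
    def vcpu_ratio(provisioned_vcores, physical_cores):
        """Overprovisioning ratio: provisioned virtual cores per physical core."""
        return provisioned_vcores / physical_cores

    ratio = vcpu_ratio(provisioned_vcores=256, physical_cores=64)
    print(ratio)  # -> 4.0, right at the assumed 4:1 rule-of-thumb ceiling
    ```

    In practice the acceptable ratio depends heavily on actual utilization: idle VMs tolerate far more overcommit than CPU-bound ones.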
  17. bbgeek17

    hotplug a ISO image to a running VM

    Glad I could help. You can now mark this thread as Solved to keep things tidy: edit the first post and update the tag via the dropdown near the subject.
  18. bbgeek17

    hotplug a ISO image to a running VM

    Hi @mjam3204, welcome to the forum. An IDE bus is not hot-plug aware. You can try switching to a SCSI-connected ISO for better luck. Cheers
  19. bbgeek17

    Missing pve entry?

    Hi @kolson3208, welcome to the forum. You are likely misremembering: there is no universal "pve" entry in the left tree. You may have had one, e.g. a storage pool name or VE name, but we wouldn't know about it. Your storage has question marks because it is likely unavailable/broken. If you run...
  20. bbgeek17

    data lost for time window

    It occurred to me that, now that you have vendor confirmation, you can build a test PVE (perhaps even with nested virtualization). Build a VM where you can generate traffic that will overwhelm the backup throughput and temporary location. This should place your VM into a precarious state. You can then...