Search results

  1. bbgeek17

    Correcting Storage

    Hi @nleistad, Confidently attributing the issue to NFS, QCOW, ZFS, network, or any other component requires proper analysis. This typically includes log review, reproducible testing, and potentially network trace reading. There are known edge cases with QCOW on NFS, particularly around...
  2. bbgeek17

    Problem connecting Proxmox 9.1 to ESXi storage to migrate VMs from ESXi

    Hi @Vladyslav , There is no documentation, as far as I am aware, that can guide you to use a non-root account. That said, you can use a tool (for example "govc") that uses the same network/API path as PVE; it makes troubleshooting the connectivity easier than trying to bring up the ESXi storage in PVE...
  3. bbgeek17

    MTU Settings for NAS storage

    Hi @Eric Thornton , welcome to the forum. MTU size is not tied to network speed. You can use non-standard MTU values on 1 Gbit just as well as on 25 Gbit or higher links. The key point is consistency: all devices participating in the same network path must use the same MTU. This includes all...
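The consistency requirement described above can be verified end to end with a DF-bit ping. A minimal sketch, where the target address 10.0.0.5 and MTU 9000 are placeholder assumptions:

```shell
# For a given MTU, the largest ICMP echo payload that fits in one packet is
# MTU - 20 (IPv4 header) - 8 (ICMP header); for MTU 9000 that is 8972 bytes.
MTU=9000
PAYLOAD=$((MTU - 28))
# -M do sets the Don't Fragment bit (Linux iputils ping); if any hop in the
# path uses a smaller MTU, the ping fails instead of silently fragmenting.
echo "ping -M do -s $PAYLOAD 10.0.0.5"
```

The command is printed rather than executed here because it needs a live peer; run the printed line from each host in the path to confirm every device agrees on the MTU.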
  4. bbgeek17

    Proxmox and Veeam Backup and Replication worker issue

    I recommend that you figure out a curl-based way to upload a file to local storage with the same account that Veeam is using. Run it locally on PVE first; if that works, run it from the Veeam network segment. If that works, convert it to a PowerShell command and test from the Veeam server. Network issue is...
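The curl test suggested above could look something like the sketch below. The node name, storage name, hostname, and API token are placeholder assumptions, and the leading `echo` keeps it a dry run:

```shell
# Upload a small ISO to a PVE storage via the REST API, authenticating with an
# API token (the same network/API path a backup worker uses).
# Remove the leading "echo" to actually execute the upload.
NODE=pve1
STORAGE=local
FILE=/tmp/probe.iso
echo curl -sk -H "Authorization: PVEAPIToken=$PVE_TOKEN" \
  -F "content=iso" -F "filename=@$FILE" \
  "https://pve.example.com:8006/api2/json/nodes/$NODE/storage/$STORAGE/upload"
```

If this works locally on PVE but fails from the Veeam segment, the problem is on the network path, not in PVE's storage configuration.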
  5. bbgeek17

    Proxmox and Veeam Backup and Replication worker issue

    Post your /etc/pve/storage.cfg. Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  6. bbgeek17

    LACP LAG Not Working

    Hi @Bones558 , welcome to the forum. There could be multiple issues that affect your connectivity. For example, you have two interfaces on the same network segment. This will lead to confusion and unpredictable results, like the ones you are experiencing now. Your LACP mode is different...
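For comparison, a typical LACP bond on a PVE node looks like the fragment below in /etc/network/interfaces. Interface names and addresses are placeholders, and the switch side must be configured for 802.3ad as well:

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

Note that the bond itself carries no address; the bridge on top of it does, which avoids the two-interfaces-on-one-segment problem described above.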
  7. bbgeek17

    How to configure HA to shut down specific VMs before migration in Proxmox VE 9.1?

    That really defeats the primary goal of the HA subsystem. Plus it would only work in a managed migration. As you can imagine on node failure there will be no way to either shutdown or live-migrate the VM. If you are only looking to address managed/manual migration - you likely need to create a...
  8. bbgeek17

    Proxmox and Veeam Backup and Replication worker issue

    Hi @acsinc , welcome to the forum. Veeam is a partner of PVE and they theoretically have access to PVE support to assist with common customer issues. However, you seem to be a Veeam only customer at this point. You may benefit from the help from a PVE implementation partner - they can usually...
  9. bbgeek17

    Suddenly unable to access web UI

    Have you tried 127.0.0.1 ? It would be helpful if you posted your configuration and the commands you run here, rather than just reporting the results. The output of these commands in text format and enclosed in CODE tags is a good start: pveversion -v uname -a uptime systemctl list-units --failed...
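The command list above can be collected in one pass. A small sketch (on a non-PVE host the PVE-specific commands simply report as unavailable):

```shell
# Run each diagnostic and label its output; paste the result inside CODE tags.
for cmd in "pveversion -v" "uname -a" "uptime" "systemctl list-units --failed"; do
  echo "=== $cmd ==="
  $cmd 2>/dev/null || echo "(unavailable on this host)"
done
```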
  10. bbgeek17

    Opt-in Linux 7.0 Kernel for Proxmox VE 9 available on test

    Great news! We will get it into our automated testing asap!
  11. bbgeek17

    iSCSI for Virtual Machine Storage in Proxmox

    Sounds like they resolved the problem. I agree that their original issue was likely network configuration related, perhaps MTU was misconfigured. Your iSCSI storage is a "storage pool" of iSCSI type in PVE speak. The LVM storage is the LVM storage pool. Happy to hear that you have it under...
  12. bbgeek17

    iSCSI for Virtual Machine Storage in Proxmox

    Hi @larryd, welcome to the forum. It's hard to provide ideas as there is not enough technical information in your post. Answering the following questions may help members provide some guidance: - What type of storage are you using? (Vendor, model) - What type of connectivity? - What is the...
  13. bbgeek17

    Still can't delete vm disk

    Hello @br8k , welcome to the forum. When you have an orphan disk, it should appear as "unusedX" in the VM's hardware configuration after doing "qm disk rescan". If it did not, there are more things to look at. One possibility - NFS is no longer marked as Image-carrying storage. That said, a...
  14. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    It should be fine to tick one and untick the other. No warranty, express or implied, regarding the results :-) If you need a more deterministic answer - purchasing a support subscription and opening a case with your hypervisor vendor is the way to go. Cheers
  15. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    For zeroing out the volumes you will want to change this to 1. I am not sure what you want to uncheck about the LUN access. The iSCSI storage is already set to "content image", that should be sufficient.
  16. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    What is the content of your /etc/pve/storage.cfg?
  17. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    It is hard to say what the risk profile is. You would need to examine what this storage is, what portion of it is used and in what way. If nothing is actually using the storage or Direct LUNs, then unchecking it will not hurt. A Direct LUN means that you took an iSCSI LUN and passed it through...
  18. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    You should not.
  19. bbgeek17

    Issue with Deleting VMs in proxmox environment/ Use luns directly?

    The man page states: --saferemove <boolean> Zero-out data when removing LVs. It zeroes out the blocks that were occupied by a particular LV, not the entire physical disk.
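For reference, the flag lives in /etc/pve/storage.cfg. A minimal sketch, assuming a hypothetical shared LVM storage named "san-lvm" on volume group "san_vg":

```
lvm: san-lvm
        vgname san_vg
        content images
        shared 1
        saferemove 1
```

With saferemove set to 1, deleting a VM disk zeroes its former LV extents before the space is returned to the volume group.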
  20. bbgeek17

    Used disk on new VM

    https://forum.proxmox.com/threads/issue-with-deleting-vms-in-proxmox-environment-use-luns-directly.182261/#post-846221