Hi @nleistad,
Confidently attributing the issue to NFS, QCOW, ZFS, network, or any other component requires proper analysis. This typically includes log review, reproducible testing, and potentially network trace reading.
There are known edge...
Hi @Vladyslav,
There is no documentation, as far as I am aware, that guides you through using a non-root account. That said, you can use a tool (for example "govc") that takes the same network/API path as PVE, which makes connectivity troubleshooting easier...
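If it helps, here is a rough sketch of a connectivity check with govc; the vCenter hostname and the account name are placeholders for your environment:

```
# Minimal govc connectivity check (all values below are examples).
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='svc-pve@vsphere.local'  # the non-root account you want to test
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1                        # only if vCenter uses a self-signed cert

govc about   # verifies authentication and API reachability
govc ls /    # lists the inventory the account is allowed to see
```

If "govc about" succeeds but PVE still fails, the problem is more likely permissions or PVE-side configuration than basic network reachability.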
Hi @Eric Thornton , welcome to the forum.
MTU size is not tied to network speed. You can use non-standard MTU values on 1 Gbit just as well as on 25 Gbit or higher links.
The key point is consistency: all devices participating in the same...
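As a quick sanity check, you can verify that a large MTU actually passes end to end with a don't-fragment ping (target IP is an example):

```
# 8972 = 9000-byte MTU minus 20 bytes IP header and 8 bytes ICMP header.
ping -M do -s 8972 192.168.1.50
# If any device in the path is still at MTU 1500, you will see
# "Message too long" errors or the pings will be dropped.
```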
I recommend that you figure out a curl-based way to upload a file to local storage with the same account that Veeam is using. Run it locally on PVE first; if that works, run it from the Veeam network segment. If that works, convert it to PS...
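A rough sketch of such an upload via the PVE API, using an API token; the host, node, storage, token, and file names are all placeholders:

```
# Upload a test ISO to storage "local" via the PVE API (all values are examples).
# Create an API token for the same user Veeam authenticates with, then:
curl -k -X POST \
  -H "Authorization: PVEAPIToken=veeam@pve!backup=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  -F "content=iso" \
  -F "filename=@test.iso" \
  "https://pve.example.com:8006/api2/json/nodes/pve1/storage/local/upload"
```

Where this first fails (locally vs. from the Veeam segment) narrows down whether the issue is permissions or network path.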
Hi @Bones558 , welcome to the forum.
There could be multiple issues that affect your connectivity. For example, you have two interfaces on the same network segment. This will lead to confusion and unpredictable results, like the ones you are...
That really defeats the primary goal of the HA subsystem. Plus it would only work in a managed migration. As you can imagine on node failure there will be no way to either shutdown or live-migrate the VM. If you are only looking to address...
Hi @acsinc , welcome to the forum.
Veeam is a partner of PVE and they theoretically have access to PVE support to assist with common customer issues. However, you seem to be a Veeam only customer at this point. You may benefit from the help from...
Have you tried 127.0.0.1?
It would be helpful if you posted your configuration and commands you run here, rather than just reporting the results.
The output of these commands in text format and encoded with CODE tags is a good start:
pveversion...
Great news! We will get it into our automated testing asap!
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Sounds like they resolved the problem. I agree that their original issue was likely network configuration related, perhaps MTU was misconfigured.
Your iSCSI storage is a "storage pool" of iSCSI type in PVE speak. The LVM storage is the LVM...
Hi @larryd, welcome to the forum.
It's hard to provide ideas, as there is not enough technical information in your post. Answering the following questions may assist members in providing some guidance:
- What type of storage are you using? (Vendor...
Hello @br8k , welcome to the forum.
When you have an orphan disk, it should appear as "unusedX" in the VM's hardware configuration after doing "qm disk rescan". If it did not, there are more things to look at. One possibility - NFS is no longer...
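For reference, the rescan and the check can look like this, using VMID 100 as an example:

```
# Using VMID 100 as an example:
qm disk rescan --vmid 100     # re-scan storages for volumes owned by the VM
qm config 100 | grep unused   # orphans should show up as unused0, unused1, ...
```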
It should be fine to tick one and untick the other.
No warranty, express or implied, regarding the results :-) If you need a more deterministic answer, purchasing a support subscription and opening a case with your hypervisor vendor is the...
For zeroing out the volumes you will want to change this to 1.
I am not sure what you want to uncheck about the LUN access. The iSCSI storage is already set to "content image", that should be sufficient.
What is the context of your /etc/pve/storage.cfg?
It is hard to say what the risk profile is. You would need to examine what this storage is, what portion of it is used and in what way.
If nothing is actually using the storage or Direct LUNs, then unchecking it will not hurt. A Direct LUN means...
The man page states:
--saferemove <boolean>
Zero-out data when removing LVs.
It zeroes out the blocks that were occupied by a particular LV, not the entire physical disk.
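In /etc/pve/storage.cfg this is a per-storage option on the LVM definition; the storage and volume group names below are examples:

```
# /etc/pve/storage.cfg (storage and VG names are examples)
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
        saferemove 1
```

With saferemove enabled, deleting a VM disk takes longer because the LV's blocks are overwritten with zeros before the LV is removed.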