LVM is a PVE-integrated way to use FC as shared storage. You can read this article to get a high-level understanding of the components involved:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it references iSCSI as...
Hi @ProlyX , can you clarify "that" in your question?
ZFS is not multi-initiator/multi-writer compatible.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Yes, lvm/thin is not cluster-aware, so you risk data loss. It's not supported for a reason; see also the table on https://pve.proxmox.com/wiki/Storage, which clearly states that lvm/thin is NOT shared storage.
No for exactly the...
@bbgeek17 wrote two pieces which should cover your questions:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
https://kb.blockbridge.com/technote/proxmox-qcow-snapshots-on-lvm/
Basically, snapshots on lvm/thick weren't supported...
Hi @unsichtbarre ,
My guess is that the majority of people will never need this package, so there is no reason to create dependencies or install it by default. Otherwise, it would become part of initramfs management on all installs.
The snapshot support sits above the FC/multipath layers. It requires LVM to be placed on top of the multipath device, as well as the snapshot-as-volume-chain 1 attribute set on the storage pool. This attribute is present on the referenced page, however...
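For reference, a sketch of what the corresponding /etc/pve/storage.cfg entry could look like. The storage name and VG name are placeholders; the assumption is a volume group created on top of the multipath device:

```
lvm: shared-fc
        vgname vg_shared
        shared 1
        snapshot-as-volume-chain 1
```

With shared 1 the pool is visible cluster-wide, and snapshot-as-volume-chain 1 enables the qcow2-based snapshot chains described in the linked article.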
Thank you for the update @Elleni . There could be many possibilities, for example: https://forum.proxmox.com/threads/vms-loosing-network.180650/#post-838011
Thank you for the update @naps1saps . You can mark the thread as SOLVED by editing the first post and updating the subject prefix.
Best of luck in your endeavors.
Hi @Elleni,
The first step is to go through standard Linux network troubleshooting. Capture the network state when everything is functioning correctly, then capture it again when the network is "broken". Compare the two states.
Access the...
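The capture/compare step can be sketched as a small helper that snapshots link, address, route, and neighbor state to a file, run once while things work and once while they are broken (paths and the function name are just an illustration):

```shell
# Snapshot the current network state into /tmp/netstate-<label>.txt
# and print the file path, so two snapshots can later be diffed.
capture() {
  out="/tmp/netstate-$1.txt"
  {
    echo "== links ==";  ip -br link
    echo "== addrs ==";  ip -br addr
    echo "== routes =="; ip route
    echo "== neigh ==";  ip neigh
  } > "$out"
  echo "$out"
}

# Usage:
#   capture good     # while everything works
#   capture broken   # after the failure
#   diff /tmp/netstate-good.txt /tmp/netstate-broken.txt
```

Diffing the two files usually points straight at the layer that changed (link down, address gone, route missing, stale ARP entry).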
It will skip the mounted paths and will not show the space occupied by the directory in its pre-mount state. One can make some inferences about the space usage, but the safest method is to unmount the external device and check again...
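To check usage without crossing into mounted devices, du's -x flag keeps it on a single filesystem. The path below is the one discussed in this thread and is only a placeholder:

```shell
# -x: stay on one filesystem, so anything mounted beneath the
# directory is excluded from the total (path is a placeholder).
dir="${DIR:-/mnt/pve/dir1}"
if [ -d "$dir" ]; then
  du -xsh "$dir"
fi

# To see files hidden *under* an active mount point without unmounting,
# bind-mount the parent filesystem elsewhere and inspect the same path
# there (requires root):
#   mount --bind / /mnt/root-view
#   du -sh /mnt/root-view/mnt/pve/dir1
#   umount /mnt/root-view
```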
What version of PVE are you running on each node?
In PVE 9, most of the iSCSI database has moved to /var/lib/iscsi.
How are you managing iSCSI sessions? Are you using PVE storage pool or manual configuration?
If the former, is your cluster healthy? If the latter...
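A quick way to compare the open-iscsi view with the PVE view (the commands are guarded so they degrade gracefully on hosts without iSCSI):

```shell
# Active iSCSI sessions as open-iscsi sees them:
iscsiadm -m session -P 1 2>/dev/null \
  || echo "no active sessions (or iscsiadm not installed)"

# Node/session database location on PVE 9:
ls -R /var/lib/iscsi 2>/dev/null || true

# What the PVE storage layer believes (only meaningful on a PVE host):
pvesm status 2>/dev/null || true
```

If the session list and pvesm status disagree, that mismatch is usually where to start digging.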
That's great news! Love it when the root cause is found, regardless of what it is!
Thank you for the update!
Cheers,
Were the Windows Event logs checked? Can the VM ping/access another VM located within the same VLAN on the same hypervisor?
Perhaps you should have a rolling network capture, or be prepared to start one at various points in the network (hypervisor...
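A full rolling capture would be tcpdump with a ring buffer (interface and filter below are assumptions). As a lightweight alternative that still timestamps each drop, a reachability log can run alongside it; the target IP is a placeholder:

```shell
# Full packet ring buffer (run as root on the hypervisor; interface
# vmbr0 and the icmp filter are assumptions):
#   tcpdump -i vmbr0 -C 100 -W 10 -w /var/tmp/ring.pcap icmp

# Lightweight rolling reachability log, to correlate with VM drops.
# Three iterations here for illustration; in practice use: while true
target=192.0.2.10
log=/var/tmp/reachability.log
for i in 1 2 3; do
  if ping -c 1 -W 1 "$target" >/dev/null 2>&1; then s=up; else s=down; fi
  echo "$(date -Is) $target $s" >> "$log"
  sleep 1
done
tail -3 "$log"
```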
Hi @Drunkm0nk , welcome to the forum.
It does not seem probable that an upgrade of the BIND9 package installed on a hypervisor would cause intermittent connectivity issues on random VMs.
You have not provided nearly sufficient information to get...
It does sound like an MTU issue. In addition to the 28 bytes of IP + ICMP header overhead, there is also a 4-byte VLAN tag that may be playing a role here.
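The arithmetic, plus a way to probe it with DF set (the host is a placeholder):

```shell
# Largest ping payload that fits a 1500-byte MTU:
# 1500 - 20 (IP header) - 8 (ICMP header) = 1472 bytes.
echo $((1500 - 20 - 8))   # 1472

# If a device in the path counts the 4-byte VLAN tag against the
# 1500-byte limit, the usable MTU drops to 1496:
echo $((1496 - 20 - 8))   # 1468

# Probe with "don't fragment" set (host is a placeholder):
#   ping -M do -s 1472 <host>   # fails if path MTU < 1500
#   ping -M do -s 1468 <host>   # try this if the above fails
```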
Great, thanks for sharing.
You can mark the thread as solved by editing the first post and selecting an appropriate subject prefix.
Cheers
I figured it out. There was a symlink missing under /etc/systemd/system/multi-user.target.wants for mnt-pve-dir1.mount. Once this was created, dir1 shows up after a reboot.
Then check why the disk is not mounted on boot, if indeed it is not mounted:
journalctl -u mnt-pve-dir1.mount
systemctl status mnt-pve-dir1.mount
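The unit name above is not arbitrary: systemd derives it from the mount path, and systemd-escape shows the mapping (path taken from this thread):

```shell
# Path-to-unit-name mapping for mount units:
systemd-escape -p --suffix=mount /mnt/pve/dir1
# → mnt-pve-dir1.mount
```

Note that for units like this, `systemctl enable` is the usual way to create the multi-user.target.wants symlink rather than linking by hand.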
You should check /etc/fstab on the new node and match it against the old nodes.
The filesystem will not be mounted by PVE; it has to be mounted already when PVE starts.
If you are mounting through systemd, you should check the log for any errors related to...
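When comparing fstab across nodes, an entry for a directory-storage mount typically looks like the following sketch. The UUID, mount point, and filesystem type are placeholders; nofail keeps the boot from hanging if the device is absent:

```
# /etc/fstab
UUID=0000-0000  /mnt/pve/dir1  ext4  defaults,nofail  0  2
```

On boot, systemd generates a mount unit from this line, which is what journalctl/systemctl would then report on.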