If you are looking for stable networking, you should avoid Broadcom. Ideally, you'd get Mellanox.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
A quick way to look at the file without all the extensive comments:
grep -vE '^\s*(#|$)' /etc/lvm/lvm.conf
:-) AI spell check
PS: although it would be very interesting if PDM allowed third-party plugins, similar to the Storage subsystem
Veeam recovery, to my understanding, can be performed “online,” meaning the VMs become available as soon as the recovery process is initiated. Proxmox Backup Server (PBS) definitely supports this capability.
That said, if IP addressing changes...
So to clarify: multipath_component_detection is 1 by default. Somehow you ended up setting it to zero, which leads to the logical error you are now seeing.
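To confirm what LVM actually sees, you can query the effective value directly (lvmconfig ships with LVM2):

```shell
# print the currently effective setting; 1 is the upstream default
lvmconfig devices/multipath_component_detection

# compare against the compiled-in default
lvmconfig --type default devices/multipath_component_detection
```

If the two differ, the override is coming from /etc/lvm/lvm.conf or a profile.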
Usually you would use LVM/thick on an iSCSI or Fibre Channel-attached SAN: https://pve.proxmox.com/wiki/Storage:_LVM
Before PVE9 you couldn't use snapshots on it though. Beginning with PVE9 a new feature was introduced which allows...
Since you have access to the VM, can you coax it into getting a DHCP lease? Continue troubleshooting the VM from the VM console.
Stop and restart networking, adjust the config, etc. Find a working configuration - that may give you a clue as to why...
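A rough sequence from inside the VM console (assuming a Debian-style guest; the interface name below is an example):

```shell
ip link show                         # identify the interface name
ip link set eth0 up                  # bring it up if it is down
dhclient -v eth0                     # request a DHCP lease, verbose
ip addr show eth0                    # confirm the assigned address
journalctl -u networking --no-pager  # look for errors from the network service
```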
Thank you for the update @Juan Ortiz
You may want to mark this thread as solved to help keep the forum tidy. You can do so by editing the first post in the thread and selecting the appropriate prefix from the subject dropdown.
Cheers
Looks like your IP has been blacklisted by Debian distribution servers for some reason. Give it 24-48 hours, or try to switch to another mirror.
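If you'd rather not wait, switching mirrors is quick. A hedged example, assuming a standard /etc/apt/sources.list layout (the replacement mirror is just one choice):

```shell
# back up, then swap deb.debian.org for another mirror
cp /etc/apt/sources.list /etc/apt/sources.list.bak
sed -i 's|deb.debian.org|ftp.de.debian.org|g' /etc/apt/sources.list
apt update
```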
You made some valid points, but I don't see their relevance to @jsterr's thread.
He has 2 LUNs that appear to be seen properly by multipath, despite a somewhat suspicious base iSCSI configuration. He ran a direct LVM tool on the LUN that...
Hey @palladin9479 , I am happy to see that our article on PVE/LVM and Shared storage was helpful in your understanding!
I don't think the OP ever said they would use Ceph. It's the opposite - they specifically said that they are not using Ceph, probably because the common advice in general (especially outside of this forum) is to use Ceph.
Hi @palladin9479 , welcome to the forum.
It is unlikely that @jsterr is/was planning to use fstab/filesystem with his PVE Cluster in shared storage configuration.
Other than that, your advice seems to agree with my suggestion of using iscsiadm...
Hi @admin55,
Congratulations on making the move. Either option is fine, frankly. Both protocols have been around for decades. It depends on your requirements, management familiarity, etc. Why not try both? Just create two storage pools and see...
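As a sketch of "try both", assuming the two candidates are iSCSI and NFS (pool names, addresses, and targets below are placeholders, not from your setup):

```shell
# define one pool per protocol, then compare behavior under your workload
pvesm add iscsi san-iscsi --portal 192.0.2.10 \
    --target iqn.2001-05.com.example:target1
pvesm add nfs san-nfs --server 192.0.2.11 \
    --export /export/pve --content images

pvesm status   # both pools should show as active
```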
Hi @bratac ,
Since your post contains references to "Allow Snapshots as Volume-Chain" and your disk references /dev, it suggests that you are using a SAN of some sort.
The ASVC feature requires use of LVM, so you must have an LVM volume left...
"wipefs -a" one of the devices in the group, if you still cant access mpath device. Remove the iSCSI storage pools, remove any nodes/sessions with iscsiadm, reboot the node, optionally remove/re-init the LUNs on SAN side.
run "vgcreate" with...
Yes, the PVE server can run as a VM in another hypervisor (PVE, ESXi, HyperV, etc). It does not even need to be a PVE server. It can be a Debian VM, and has very low specs. The QDevice is described here...
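Once the small Debian VM is up, attaching it as a QDevice is a short procedure (the IP below is a placeholder):

```shell
# on the QDevice VM:
apt install corosync-qnetd

# on one PVE cluster node:
apt install corosync-qdevice
pvecm qdevice setup 192.0.2.53
pvecm status   # should now show the extra vote
```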
Hi @ManuS , welcome to the forum.
The supported/recommended configuration is an equal number of nodes in each site, with an additional node in a third site.
With your proposed configuration, if SiteA (4 nodes) fails, SiteB (3 nodes) will not have...
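The arithmetic behind this, as a minimal sketch (corosync quorum is a strict majority of the total votes, assuming one vote per node and no QDevice):

```python
# strict-majority quorum: more than half of all votes
def quorum(total_votes: int) -> int:
    return total_votes // 2 + 1

site_a, site_b = 4, 3
total = site_a + site_b            # 7 votes in the cluster
print(quorum(total))               # 4 votes needed for quorum
print(site_b >= quorum(total))     # False: SiteB alone cannot reach quorum
```

So losing the larger site leaves the survivors below quorum, and the whole cluster stops.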
Hi @baalkor,
I understand the recommendation in the Red Hat documentation, but the right choice really depends on two factors: what you're optimizing for (latency, IOPS, or bandwidth) and how fast your iSCSI SAN actually is.
In general, with...
Thank you for the update @adrian-1030
You can mark this thread as SOLVED by editing the first post and selecting the appropriate option from the dropdown near the subject.
Cheers.