Are you able to check the logs of your S3 providing software? It will likely have additional information on what failed.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi @adrian-1030, welcome to the forum.
What you’re running is a fairly complex nested virtualization and networking setup, and it’s unlikely this is a Proxmox VE issue. If I understand correctly, your topology is:
Fedora (Wi-Fi) > VMware...
Hi @vluxh ,
PDM runs on top of Debian Linux, so the general Debian Linux VM recommendations apply to a PDM build.
We've done some performance analysis, and our findings may be helpful to you...
Note that a patch has been proposed to improve this behaviour:
https://bugzilla.proxmox.com/show_bug.cgi?id=3229#c7
It has not been accepted yet, but one could manually apply it to a system if there is an urgent need.
Cheers
You should review and follow the instructions on page 355.
You are welcome.
The error you were receiving was somewhat generic. The tool was upset about the label/signature, not a partition:
Give "wipefs -a /dev/mapper/3690b11c0000238a20000030e5098c67b" a try.
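If you want to be careful, wipefs can first be run without -a, which only lists the signatures it would erase (read-only). Here is a safe sketch of that sequence on a scratch file rather than your real device, assuming util-linux's mkswap and wipefs are available:

```shell
# Safe demo on a scratch file: create a swap signature, then wipe it
f=$(mktemp)
truncate -s 64M "$f"
mkswap "$f" >/dev/null          # writes a swap signature (util-linux)

wipefs "$f"                     # read-only: shows the swap signature
wipefs -a "$f"                  # erases all detected signatures
wipefs "$f"                     # no output: nothing left to wipe

rm -f "$f"
```

The same two-step pattern (inspect first, then -a) applies to the multipath device above.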
Have you retried: pvcreate /dev/disk/by-id/dm-uuid-mpath-3690b11c0000238a20000030e5098c67b ?
Have you rebooted the server since detecting the error?
Can you run these commands on the Multipath device (via ID or dm-10 path):
fdisk -l /dev/mapper/3690b11c0000238a20000030e5098c67b
blkid /dev/mapper/3690b11c0000238a20000030e5098c67b
lsblk /dev/mapper/3690b11c0000238a20000030e5098c67b...
The output shows 8 paths to the LUN; is this what you expected?
What does "multipath -ll" show? What about "fdisk -l [/mpath_device]" ?
It sounds like OP may be using a variant of ZFS/iSCSI, perhaps with a custom plugin for TrueNAS.
If you need automatic migration, i.e. HA functionality, the storage must be named the same on all nodes. There are currently no storage mappings with HA.
If you are doing manual migration, you can specify the target storage via the CLI/API (not in the PVE UI). I believe...
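For the manual case, a sketch of how the target storage can be passed on the command line, assuming a hypothetical VM 100 migrating to a node named pve2 with a storage named local-lvm on the target (substitute your own IDs):

```shell
# Live-migrate VM 100 to node pve2, placing its disks on a differently
# named storage on the target node (not possible from the PVE web UI)
qm migrate 100 pve2 --online --targetstorage local-lvm
```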
Hi @Hemi ,
Since you’ve confirmed that multicast traffic is being received on both the hypervisor and inside the VM, this is most likely a statistics-reporting discrepancy between the Virtio driver and the Linux kernel.
I don’t have guidance on...
You are correct regarding the WWID. I will see myself off to get more coffee. In the meantime, I have updated my reply in #3 and deleted the subsequent one.
Cheers
@jsterr , have you checked our guide : https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Is there anything you feel is missing or unclear?
If you have tried multiple experiments over time, the letter-based device name may already be in use by something else. The kernel allocates device names on each node independently. The disk signature will be identical across the nodes, so the letter is...
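To compare the per-node letter names with the stable identifiers, something like this (generic, not specific to the LUN in this thread) can be run on each node:

```shell
# Letter names (sda, sdb, ...) depend on probe order and can differ
# per node and per boot
lsblk -o NAME,SIZE,TYPE,WWN,SERIAL

# Identifiers derived from the disk signature/WWID are identical on
# every node that sees the same LUN
ls -l /dev/disk/by-id/ 2>/dev/null || echo "no by-id entries on this system"
```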
I know this thread has been around for a long time, but I came across it today when I needed to list all VMs in our cluster and couldn't find the right command, so I am sharing it here.
The command "pvesh get cluster/resources" will list all...
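For scripting, the same call can be narrowed to VMs only and emitted as JSON; a sketch, run as root on any cluster node:

```shell
# Only VM (QEMU + LXC) entries, human-readable table
pvesh get /cluster/resources --type vm

# JSON output, e.g. for piping into jq or another tool
pvesh get /cluster/resources --type vm --output-format json
```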
The screenshot shows a question mark for PVE3/ssy-storage, i.e. it is not active/configured. Could this be related?
Hi @Tom123 ,
Most likely the SAN is not configured correctly for what you are trying to achieve. In practice, this usually means the LUN is not mapped to all required target portals. If the kernel can establish an iSCSI session but no block...