Hi @Kurgan,
Thanks for the great question. LVM is considerably more sophisticated than QEMU/QCOW, though I'm not an expert in its internal architecture. My assumption is that it uses a mix of demand-based and timer-based flushing. I'll try to...
@wowo - it is very likely that the GUI is running just fine. It is also likely that the OP did not use an appropriate IP address for their LAN, probably by following one of the generic tutorials.
There are also a few more likely possibilities, all deal...
Hi @Lvt , welcome to the forum.
Here is a standard list of questions the forum members would want to know from someone in your situation:
What is the IP of the PVE server?
What is the IP of your workstation?
What is the IP of your gateway?
Can...
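If it helps, the first three answers can usually be gathered in one go from a shell on each machine (standard iproute2 commands, nothing PVE-specific):

```shell
# Run on the PVE host (and, where relevant, on your workstation):
ip -4 addr show                                  # interface IPs and subnet masks
ip route show default                            # default gateway
cat /etc/resolv.conf 2>/dev/null || echo "no resolv.conf"   # DNS servers in use
```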
Hi @Jocky Wilson ,
You did not have VMs on the second disk. You only had VM disks there. Your VM configuration was stored on the disk that you "blew away".
The simplest way to recover is to create new VMs with the same IDs as your old ones...
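As a rough sketch of that recreate-and-reattach flow (hypothetical VM ID 100, storage name "local-lvm", and disk name — adjust all three to your setup), guarded so it only acts on an actual PVE node:

```shell
# Sketch only: recreate an empty VM with the old ID, then rescan so PVE
# finds the orphaned disks (named vm-<ID>-*) and reattach one of them.
if command -v qm >/dev/null; then
    qm create 100 --name restored-vm --memory 2048 --net0 virtio,bridge=vmbr0
    qm disk rescan --vmid 100                     # picks up orphaned vm-100-* disks as "unused"
    qm set 100 --scsi0 local-lvm:vm-100-disk-0    # hypothetical disk name; check the rescan output
else
    echo "run this on a Proxmox VE node"
fi
```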
Hi,
a fix is available (currently in the pve-test repository) with libpve-storage-perl >= 9.1.0
libpve-storage-perl (9.1.0) trixie; urgency=medium
* import/export formats: fix regression for import/export of 'images'
content. This affected...
Hi @K1NG , welcome to the forum.
What happens when you run the following command on both nodes: pvesm list local
Are there any stale/duplicate disks where they should not be?
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Rolling out a new feature set that impacts data handling is naturally a significant undertaking. It carries a different risk profile than, say, introducing a new GUI option. I imagine the team wants to ensure that a few remaining details are...
We know that the package is deprecated, but it is a dependency of ifupdown2 for the time being; that's why we use it. Being deprecated does not prevent it from being installed, so the OP's issue must have a different cause.
This is a hardware problem, not an OS-related one. In general, yes - you should update all firmware to the latest versions if they aren’t already.
It’s really something you should follow up on with Glotrends and Dell.
We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.
This release is based on Debian 13.2 "Trixie" but we're...
Awesome news! We will get it into our official testing pipeline ASAP!
Great work!
This may be worth attention from the PVE developers. The isc-dhcp-client package is marked as deprecated and is being replaced by a package under another name.
Perhaps there is a difference in Debian repository context/updates.
root@pve9r1-iscsi-host1:~#...
Hi @Wodel , there is no built-in, i.e. PVE native, way to do what you want.
You first have to define what "loses connection" means.
Is it physical link down?
Is it no icmp connectivity?
Is it no tcp connectivity?
Is it no tcp connectivity to local...
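Each of those failure modes can be probed separately. A rough sketch, with a hypothetical interface name (eth0), peer address, and port — substitute your own:

```shell
HOST=192.0.2.10   # hypothetical peer address
PORT=8006         # hypothetical TCP port

# Physical link state (layer 1/2)
ip link show eth0 2>/dev/null | grep -q "state UP" && echo "link up" || echo "link down/absent"
# ICMP reachability (layer 3)
ping -c1 -W1 "$HOST" >/dev/null 2>&1 && echo "ICMP ok" || echo "no ICMP reply"
# TCP connectivity to a specific port (layer 4)
timeout 2 bash -c "</dev/tcp/$HOST/$PORT" 2>/dev/null && echo "TCP port open" || echo "TCP port unreachable"
```

A monitoring hook would run checks like these on a timer and trigger whatever action you decide on.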
You may also want to re-read the guide you are following and pay particular attention to URL construction and port numbers specified there in the "Logging into Plex" section.
Cheers
Hi @PjotrBee , welcome to the forum
I am not familiar with this particular container and what it is expected to expose. That said, there are more steps for you to isolate the issue.
- Are you sure that https is exposed not http ? Have you tried...
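One quick way to tell the two apart is to probe both schemes directly with curl (hypothetical IP and port — substitute your container's values):

```shell
HOST=192.0.2.50   # hypothetical container IP
PORT=8443         # hypothetical port

# TLS first: -k skips certificate verification, -v shows whether a handshake happens at all
curl -skv --max-time 3 "https://$HOST:$PORT/" -o /dev/null || echo "no HTTPS listener"
# Then plain HTTP on the same port
curl -sv  --max-time 3 "http://$HOST:$PORT/"  -o /dev/null || echo "no HTTP listener"
```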
PVE does not permit/support subdirectories in well-defined paths, such as "images".
Try either moving your images up a level (removing the "0" directory) or symlinking them above it. You should end up with images here:
'local:/optimus/lib/vz/images'
You should also be...
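As a sketch of the symlink variant, demonstrated in a scratch directory so the paths are hypothetical — substitute your real storage path:

```shell
# Scratch demo: a VM image directory one level too deep, exposed via symlink.
base=$(mktemp -d)
mkdir -p "$base/images/0/100"
touch "$base/images/0/100/vm-100-disk-0.qcow2"   # stand-in for a real disk image
ln -s "$base/images/0/100" "$base/images/100"    # make it visible at the expected depth
ls -l "$base/images/"
```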
Just for the record, the VM config sizing and change limits are related to pmxcfs: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
In your first post you mentioned that you are already using a direct 10 Gbit connection.
You reported that your network throughput tops out at ~100 Mbit, yet you also tested and confirmed that a network-only benchmark can achieve much higher...
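To rule out the NIC having negotiated down to 100 Mbit, it's worth checking the reported link speed and re-running a raw TCP benchmark over the direct link (hypothetical interface name and peer IP — adjust to your setup):

```shell
IFACE=eth0   # hypothetical 10 Gbit interface name
ethtool "$IFACE" 2>/dev/null | grep -E "Speed|Duplex" || echo "ethtool unavailable or no such interface"

# Raw TCP throughput between the two nodes over the direct link:
#   receiver: iperf3 -s
#   sender:   iperf3 -c 10.0.0.2 -P 4 -t 10    # hypothetical peer address
```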
You mentioned migration and VMware, so we can presume that you are doing an ESX to PVE migration? If that guess is correct, then it's safe to say that you are also using the PVE ESXi migration wizard?
If the above is correct, you have to keep in mind...
There’s really no scenario where Proxmox would be involved in this issue. It’s not part of the path for what you’re troubleshooting.