Not only that, but if there are dead DM devices, the lvs, pvs, and other scan commands used by PVE will hang. This, in turn, will cause the stats daemon to hang.
Having dead devices on the system will lead to unpredictable instability.
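A quick way to check for leftover device-mapper entries is a sketch like this (the removal command is commented out and the device name is an example; only remove an entry after confirming its backing storage is truly gone):

```shell
#!/bin/sh
# List device-mapper devices with their open counts; a leftover entry
# whose backing storage no longer exists is a candidate for removal.
OUT="${TMPDIR:-/tmp}/dm-scan.txt"
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup info -c -o name,open,attr > "$OUT" 2>&1
else
    echo "dmsetup not installed" > "$OUT"
fi
cat "$OUT"
# After confirming a device is truly dead (no I/O, backing store gone):
#   dmsetup remove <device-name>
```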
Hi @dindon_tv , welcome to the forum.
If the steps are executed properly, you should not lose the data.
There are many threads on the forum about this, for example:
https://forum.proxmox.com/threads/increase-local-lvm-and-vm-disk-size.121257/...
Normally, when using shared LVM, the LV for a particular VM is active only on the node that owns the VM. You should not see a "dev" link for that LV.
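You can check the activation state on a node with something like this sketch (the VG/LV names are examples, not from your system; the 5th character of lv_attr is "a" when the LV is active):

```shell
#!/bin/sh
# On shared LVM, an LV should be active only on the node that owns
# the VM. The lv_attr column shows "a" in the 5th position when active.
OUT="${TMPDIR:-/tmp}/lv-scan.txt"
if command -v lvs >/dev/null 2>&1; then
    lvs -o vg_name,lv_name,lv_attr > "$OUT" 2>&1
else
    echo "lvs not installed" > "$OUT"
fi
cat "$OUT"
# Deactivate a stray LV on a non-owning node (example names):
#   lvchange -an shared-vg/vm-100-disk-0
```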
From your output, you are using the new QCOW/LVM technology. Keep in mind that it still has...
Hi @Budgreg , welcome to the forum.
Is this how you would describe a system in a ticket you open with NetApp? :-) What does it mean for software to be irritated? :-)
This just removes a pool definition from PVE, the OS/Kernel are still very...
You are welcome, happy we could help.
@johannes is correct. The main reason we haven't updated the lvm-shared-storage KB article is that the lvm snapshot bits are not yet ready for production. A considerable amount of focused development and...
Hi @RonRegazzi , welcome to the forum.
Most of your questions regarding best practices should be addressed in your storage vendor's documentation. Many storage vendors recommend using multiple subnets as a best practice.
That said, you may find...
@UdoB You are 100% correct and thanks for calling it out! Tables without units are incomplete. I've updated the table headers to clarify appropriate units.
Cheers
It is Perl; it is what PVE does to health-check NFS. You can look at the two CMD= lines and run those commands manually to see if they fail. Then you can troubleshoot why they fail.
You should try testing that it is actually working by doing a large...
My first impression from the information you provided: you have a network issue. Perhaps an MTU mismatch.
Note that PVE's health checks for NFS consist of RPC probing (showmount, rpcinfo). Those often use UDP.
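Those probes can be run by hand, along with an MTU check, as in this sketch (the server address is a placeholder you would replace; the real probes are commented out so you can run them one at a time):

```shell
#!/bin/sh
# Mirror PVE's NFS health probes manually. Replace the placeholder
# address below with your actual NFS server before running the probes.
NFS_SERVER="192.0.2.10"   # placeholder (TEST-NET address)
# Step 1: RPC portmapper query (often over UDP):
#   rpcinfo -p "$NFS_SERVER"
# Step 2: export listing (also RPC-based):
#   showmount -e "$NFS_SERVER"
# Step 3: path MTU check with fragmentation prohibited; for a
# 9000-byte MTU the ICMP payload is 9000 - 28 bytes of headers = 8972
# (use 1472 for a standard 1500-byte MTU):
#   ping -M do -s 8972 "$NFS_SERVER"
echo "probes prepared for $NFS_SERVER"
```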
My next step is to use those...
If I remember correctly, the official answer is: it should work, but it’s not guaranteed - nobody explicitly tests upgrades between every possible combination of minor in-family releases.
That said, the Proxmox team takes great care to avoid...
Hi @akulbe,
What happens when you execute: pvesm status
Are you able to "ls"/access the /mnt/pve/VM-Linux and do you see the data there?
There is more involved in the PVE/NFS relationship than port 2049.
Hi @pmvemf,
Following up on this, I asked our performance team to review the kernel iSCSI stack (what you referred to as the "Proxmox native initiator," which is in fact the standard Linux initiator).
Our testing with PVE9 showed no functional...
Hi @DJohneys , welcome to the forum.
This is not PVE-specific but rather standard Linux administration. There are many ways to do what you want:
echo 'export http_proxy="http://proxy.example.com:8080"' >> ~/.bashrc
echo 'export...
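Another common option is a system-wide profile snippet instead of per-user ~/.bashrc edits. A sketch (the proxy address is the same example placeholder as above; written to a temp path here for illustration, on a real system it would go in /etc/profile.d/proxy.sh):

```shell
#!/bin/sh
# Write proxy exports to a profile snippet so every login shell picks
# them up. Temp path used for illustration; use /etc/profile.d/proxy.sh
# on a real system.
SNIPPET="${TMPDIR:-/tmp}/proxy.sh"
cat > "$SNIPPET" <<'EOF'
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
export no_proxy="localhost,127.0.0.1"
EOF
cat "$SNIPPET"
```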
Hi @HaVecko,
I don’t have any concrete suggestions for you, mainly because we simply don’t know what happened, only a theory.
As a reminder, I’ve never worked with your particular backup vendor. It’s entirely possible that the whole theory is...
Hi @mosaab , welcome to the forum.
I am guessing you had a PVE8 cluster and now added a PVE9 node?
Search for "gluster" on this page: https://pve.proxmox.com/wiki/Roadmap
Follow on with reading...
With this requirement ^, do what Google suggested, which seems to be to install a CIFS server and share the location via CIFS.
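A minimal sketch of that approach (the share name and path are examples; shown against a temp file here, on a real host you would append to /etc/samba/smb.conf after "apt-get install samba", then "systemctl restart smbd"):

```shell
#!/bin/sh
# Sketch: define a CIFS/SMB share (example name "share", example path
# /srv/share). Written to a temp file for illustration; on a real host,
# append this stanza to /etc/samba/smb.conf and restart smbd.
CONF="${TMPDIR:-/tmp}/smb-share.conf"
cat > "$CONF" <<'EOF'
[share]
   path = /srv/share
   read only = no
   guest ok = no
EOF
cat "$CONF"
```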
Cheers
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
With the above additional information, I am even more convinced that there was likely a QEMU snapshot/filter/staging that was lost (not replayed) on reboot. It is not surprising that your Windows server got unsynced when its data went back; it's a...
I am a little confused about what data was lost - actual data inside the VM (i.e., files/databases/updates), or monitoring stats from an external service?
You should probably invest some time to understand how this critical process works and how it affects...
In hindsight, you should have skipped the HBA and SAS and gone with NVMe.
When choosing between ZFS and Ceph, keep in mind that ZFS is a local filesystem, while Ceph is a form of distributed/shared storage. Each comes with its own set of pros...
You have a network problem. When investigating a network issue, there are three major components to consider: the Server (NFS), the Intermediate devices (NIC, cable, switch), and the Client (PVE).
PVE performs frequent health checks on NFS...