So, just to make sure I understand correctly: this issue occurs because the VM's write speed exceeds the PBS upload speed, and the "copy-before-write" mechanism forces the guest IO to wait until the data is sent to PBS, right?
I found the...
Hi,
PBS backups use a copy-before-write principle for data consistency. That means that when the guest writes to a block that has not yet been backed up, the old contents of that block are first copied to the backup, and only then is the write applied to the block device. For VMs with high write load...
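The principle above can be sketched roughly like this (a toy model only, not the actual QEMU/PBS implementation; all names here are made up for illustration). The key point is that the guest write cannot complete until the old block contents have reached the backup target:

```python
# Toy model of copy-before-write: a guest write to a not-yet-backed-up
# block first copies the old contents to the backup, then overwrites it.
class CopyBeforeWriteDisk:
    def __init__(self, blocks):
        self.blocks = list(blocks)          # live disk contents, one item per block
        self.backup = [None] * len(blocks)  # backup target (e.g. PBS)
        self.saved = set()                  # blocks already sent to the backup

    def backup_block(self, idx):
        """Copy a block to the backup target (in reality a possibly slow upload)."""
        if idx not in self.saved:
            self.backup[idx] = self.blocks[idx]
            self.saved.add(idx)

    def guest_write(self, idx, data):
        """Guest IO during backup: waits for the old data to be saved first."""
        self.backup_block(idx)   # copy-before-write: old content goes out first
        self.blocks[idx] = data  # only then does the guest write complete

disk = CopyBeforeWriteDisk(["a", "b", "c"])
disk.guest_write(1, "B")   # block 1 is backed up as "b" before being overwritten
print(disk.backup[1])      # -> b
print(disk.blocks[1])      # -> B
```

This also shows why a slow upload to PBS stalls guest IO: `guest_write` is blocked for as long as `backup_block` takes.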
Hello,
I have now installed version 0.1.271. After repairing and restarting the machine several times, the dashboard now detects that the agent is installed. Thanks a lot for the help.
If you are still struggling with the direct import wizard or encountering "failed to stat" errors, you might want to try a more traditional but highly robust method.
The most reliable approach, especially for very large VMs or when the network...
Before committing, I would first run:
qm importovf <new_vmid> <path_to_ovf_file> <target_storage> --dryrun
to check that the OVF manifest is parsed correctly. This is advisable, since different hypervisors produce slightly different manifests.
That's a very fair point. While adding a full PVE node might offer the minor convenience of seeing the arbiter's status directly in the WebUI, the administrative overhead and potential complexities you mentioned—especially regarding storage...
Thank you so much for sharing your experience—this is truly valuable. It helps me better understand the lower limits of what a Q-device requires to function effectively.
Your insights further confirm how flexible the Q-device setup can be...
For Windows to properly report used memory to PVE you need:
- VirtIO Balloon driver and service installed and running (both are installed and enabled automatically by the VirtIO ISO installer). Don't confuse this with the QEMU Guest Agent, which does different...
I have a k=4, m=2 erasure-coded pool on a single host with six 6 TB SAS drives in a Dell R730. I have an NVMe drive (Intel DC P4510) split into DB and WAL partitions for each OSD. I am using this pool with CephFS. Using iostat, I am not seeing any reads...
It is an appropriate way to achieve your goal.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I tested latency half a year ago: https://forum.proxmox.com/threads/qdevice-deployment.173182/post-805735
From my experiment, it seems that latencies of around a hundred milliseconds are tolerable.
Adding a Hyper-V–based PVE host just for quorum is short-sighted. You are introducing additional, unnecessary management overhead:
Storage pools will now need to be restricted to specific nodes.
If you add NAS storage later and do not restrict...
HTTP 400 would be a Bad Request... Does the outbound internet traffic go out directly or through a proxy? Is the webhook a GET or a POST?
You could capture what exactly happens there with a tcpdump...
I also found this... maybe the...
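Besides capturing traffic, it can help to reproduce the webhook call outside of PVE to see whether the 400 comes from the endpoint itself or from something in between. A minimal sketch, using a throwaway local server in place of the real endpoint (the URL, path, and payload are placeholders, not the actual webhook configuration):

```python
# Reproduce a JSON webhook POST against a local test server and inspect the
# HTTP status. Swap the placeholder URL for the real endpoint to compare.
import http.server
import json
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # A strict endpoint might answer 400 to malformed JSON; echo 200 here.
        self.send_response(200 if json.loads(body) else 400)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/hook"  # placeholder URL
req = urllib.request.Request(
    url,
    data=json.dumps({"message": "test"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    status = resp.status

print(status)  # -> 200
server.shutdown()
```

If the same request succeeds directly but fails from the PVE host, a proxy or the request body/headers PVE sends would be the next thing to look at in the tcpdump capture.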
Thanks for your points. While GFS2 technically works, I would not recommend it for production use. This setup was only a PoC to demonstrate what could be achieved.
We also added PBS as a qdevice to provide proper quorum handling for HA.
For...