Hello @BvE,
your assumption is correct. After you set issue_discards = 1 in /etc/lvm/lvm.conf, you can apply the changes without a reboot by running vgchange --refresh.
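For reference, a quick way to double-check the setting and reapply it without a reboot (just a sketch; the exact grep output depends on how the line appears in the devices section):

grep issue_discards /etc/lvm/lvm.conf    # should show issue_discards = 1
vgchange --refresh                       # reapplies the configuration without a reboot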
If it can help you: after installation, I got this error:
CVE-2018-3646
So I followed this post
by setting mitigation=off in the file.
The contents of the file are now:
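In case it helps others: on a GRUB-booted Proxmox host this kind of change usually goes on the kernel command line (a sketch, assuming that is the file referred to above; the parameter is spelled mitigations=off, and systemd-boot installs use /etc/kernel/cmdline plus proxmox-boot-tool refresh instead):

# /etc/default/grub (assumed location)
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

update-grub    # apply, then reboot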
Hello @jnthans,
glad the plugin update worked for you.
Regarding the incorrect Hyper-V helper issue that @AndreasS mentioned: This usually points to the Backup Proxy configuration. It's worth checking which proxy is set for the Proxmox...
yes, exactly.
No changes were made to the templates - but I don't get the HTML version with the normal message...
Tried it on several systems - there is no attachment with HTML...
Tobias
I wasn't expecting that and have already booted one node with 6.14. Interestingly, VMs with "aio=io_uring" also boot without any apparent errors.
I'm testing ..
We now have a handful of VMs running there, as a precaution with the...
The IP address on the loopback interface (in the FRR config) should be /32, not /24.
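For comparison, the loopback stanza in frr.conf would then look roughly like this (the address and fabric name are placeholders):

interface lo
 ip address 10.0.0.1/32
 ip router openfabric 1
!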
If that still doesn't work, could you also post the output of the following commands:
vtysh -c 'show openfabric neighbor'
vtysh -c 'show openfabric interface'
I understand that when removing a VM (or container) from LVM-Thin, discards are not sent to the physical SSD storage. To enable this, issue_discards = 1 must be set in lvm.conf.
Is this correct, and is there an option to redeploy lvm.conf...
No. What I meant was, if you stop them and something doesn't work, then they were in use, but you can just start them again. Sort of a brute force approach, I know.
FWIW my thread on the separate WAL usage.
Hello @kobold81,
could you please post the configuration file of your VM? You can find it at /etc/pve/qemu-server/VMID.conf (replace VMID with the actual ID of your virtual machine). It's possible that the disk controller type (e.g., SATA...
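If copying the file is awkward, the same information can also be printed with qm config (100 is a placeholder VMID):

qm config 100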
Hello @bacbat32,
While @SteveITS is right that WAL/DB can contribute to the raw usage, the 1.8 TiB of STORED data seems to be caused by the ~900k objects that your initial rados -p ceph-hdd ls command showed. Since rbd ls is empty for that pool...
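A couple of read-only commands that may help to identify those objects (pool name taken from your output; the head limit is arbitrary):

rados df                               # per-pool object count and raw usage
rados -p ceph-hdd ls | head -n 20      # sample of object names to spot a pattern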
Hello
I can confirm that I had the same problem while restoring a Veeam agent backup: failed to prepare disk for restore.
After updating the Proxmox plugin, it works now.
Thanks!
Hello @pvpaulo,
@shanreich is on the right track. The different configurations are almost certainly caused by the vlan-aware flag on your vmbr0 on node PVE01.
If that bridge is set as VLAN-aware in /etc/network/interfaces, SDN correctly omits...
You are using overlapping subnets across three interfaces and two different VLANs, which is asking for trouble. Use a separate subnet for VLAN 149 at least.
bond0 is most likely not coming up due to a typo in your ifupdown2 configuration (auto...
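For reference, a VLAN-aware vmbr0 in /etc/network/interfaces typically looks like this (eno1 stands in for your physical NIC or bond):

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094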
Thanks for the quick and positive feedback. I will test it accordingly, possibly as early as tomorrow.
A script with `qm set` sounds promising, I'll look into it. Thanks.
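(In case it helps anyone following along: such a script can be a simple loop over the VMIDs; --onboot 1 below is only a placeholder option, substitute whatever setting is being changed.)

for vmid in 100 101 102; do
    qm set "$vmid" --onboot 1
done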
https://community.veeam.com/blogs-and-podcasts-57/proxmox-ve-9-0-now-supported-by-veeam-upgrade-steps-11627
Did you install this?
By this I mean the updated plugin from Veeam.
Prior to this version, PVE 9 was not supported by Veeam.
Hello @alexandere,
the method shown by @Moayad using pct set <CTID> --nameserver <IP> is the correct way to make the DNS setting permanent. Normally, the DNS server should be automatically adopted from the host settings during the container's...
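A concrete example with placeholder values, including a quick check afterwards:

pct set 101 --nameserver 192.168.1.53    # 101 and the IP are placeholders
pct config 101 | grep nameserver         # verify the setting was stored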