Hi, I am running a 3-node Proxmox/Ceph cluster which is scheduled for an upgrade to the current PVE version, which as of today is 7.3.
The plan was to perform a "rolling" upgrade: upgrade one node first and then, after a week, upgrade the remaining two nodes as well.
When...
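For context, the usual sequence for a rolling upgrade of a Ceph node looks roughly like the sketch below (a minimal outline only; verify the exact steps against the official upgrade guide for your versions):

# keep Ceph from rebalancing while the node is down:
ceph osd set noout
# upgrade and reboot the node, then verify cluster health before touching the next one:
ceph -s
# once the node's OSDs are back up:
ceph osd unset noout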
Hi @Dunuin , thanks for your help! Sounds very interesting…
So, like putting two low-power Intel NUCs or other, more energy-efficient devices running PVE in the rack and adding them to the cluster, for quorum only? Cool! :)
Having five nodes again would allow me to temporarily shut down the...
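For reference, joining such a quorum-only node would presumably work like any other node join (the IP of an existing cluster member below is a placeholder):

# run on the new low-power node:
pvecm add 192.168.1.10
# then check that the vote count went up:
pvecm status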
Hi! I currently operate a production PVE 7 cluster of three nodes for an internal development team. Due to energy-saving demands my boss wants to run all VMs/CTs on one cluster node and temporarily shut down the two then-empty nodes. As each of the nodes has plenty of RAM/resources and we...
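(For anyone reading along: with three nodes, shutting two down drops the cluster below quorum. The current vote situation can be checked as sketched below; forcing quorum is an emergency measure, not something for regular operation.)

pvecm status
# emergency-only, forces quorum on the single remaining node:
pvecm expected 1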
Okay, yeah, it definitely makes a lot of sense to have vanishing off by default. I just thought about mirroring the two PBS instances, which one would achieve with the vanish option. But normally I can achieve different retention schedules, if needed, by leaving vanishing off and implementing a different...
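For reference, toggling that on an existing sync job should also be possible from the CLI, along these lines (the job ID below is a placeholder):

# list sync jobs to find the ID:
proxmox-backup-manager sync-job list
# enable removal of vanished snapshots for true mirroring:
proxmox-backup-manager sync-job update s-mirror-pbs2 --remove-vanished true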
Hi Matthias and thanks for your quick help!
As the first PBS still looks fine, I would delete the .chunks folder's content, then set the vanish option on the sync job (which I totally missed) and finally do a resync or wait for the next scheduled run.
So if the vanish option is set I would...
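For the one-off resync, a manual pull from the primary might look like this (remote and datastore names are placeholders):

# pull everything from remote "pbs1", removing snapshots that vanished on the source:
proxmox-backup-manager pull pbs1 datastore1 datastore1 --remove-vanished true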
Hi, I have two PBS servers in my setup.
The primary one is ok and working as expected.
The second PBS has a sync job to basically clone the content from the first PBS. That's all it does. Well, apparently I didn't understand that the datastore on the second PBS needed its own GC/prune job config -...
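In case it helps others: garbage collection can also be kicked off manually on the mirror's datastore (the datastore name below is a placeholder):

# start GC on the mirror's datastore and check its progress:
proxmox-backup-manager garbage-collection start datastore1
proxmox-backup-manager garbage-collection status datastore1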
Hi, and sorry for adding in here with a question and no answer. Did you finally find a solution to this issue? Or did you stay on iSCSI?
I am currently experiencing the exact same behavior on a somewhat similar setup. Windows disk usage nearly always at around 100% on an all-flash cluster...
Thanks a lot @aaron, this helps!
I will 1st) set the VM to "VirtIO SCSI single" and then activate "IO thread" on the corresponding SCSI device for this VM. Besides, I'll try to set the cache from "Default" to "Write back" on this disk.
then 2nd) look for abnormalities in ceph tell osd.* bench.
Is...
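For reference, the equivalent CLI steps for 1st) would presumably be something like this (VM ID and volume name are placeholders; the controller change takes effect after a full stop/start of the VM):

qm set 100 --scsihw virtio-scsi-single
# re-specify the disk with IO thread and writeback cache enabled:
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,cache=writeback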
Looking at the "wa" column from top I'd say there is no I/O wait on the host. Is top okay for checking for IO delay on the PVE host? Or what would you suggest? iostat?
There is currently no backup running.
The guest is mostly above several hundred or even thousands of milliseconds of latency for a few...
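For what it's worth, iostat from the sysstat package gives a per-device view that is more telling than top's aggregate "wa":

apt install sysstat
# %util and await show per-device saturation and latency, refreshed every second:
iostat -x 1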
Hi, I am currently operating a 3-node PVE 7.0 & Ceph cluster (based on 10× NVMe / 10× SATA SSDs in each node). The Ceph network is a dedicated 100 Gb Ethernet link.
In Windows guests the disk performance is maxing out at 100% disk usage and the latency is at multiple hundred up to several thousand...
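A quick way to rule the Ceph layer in or out would be to benchmark it directly (the pool name below is a placeholder):

# raw per-OSD write benchmark:
ceph tell osd.* bench
# cluster-level write benchmark against a pool for 10 seconds:
rados bench -p testpool 10 write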
Hi, I'm totally new to this "timeout waiting on systemd" thing. Today I made a snapshot and then a manual backup of a Windows VM (ID 229, with PCIe passthrough) to a PBS server and the backup failed. After that the machine won't start again with error...
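A workaround often suggested for this error, assuming a stale QEMU scope unit is blocking the start, looks like this:

qm unlock 229
# check whether the old scope unit is still around, and stop it if so:
systemctl status 229.scope
systemctl stop 229.scope
qm start 229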