Hmm, I have a certain track record of updating Proxmox clusters, sometimes across several versions, from having to update neglected, unmaintained systems at customer sites. So here is a bit of best practice:
- If you stick strictly to the extremely good documentation in the Proxmox wiki, you can...
A tape drive won't hurt, right? It's not meant to replace disk-to-disk backups. It's meant to give you a full copy of your data (say, weekly) that you can make physically inaccessible and truly offline (and optionally relocate to a secure vault) in case something goes terribly wrong.
Having the...
Hi,
I need some input for troubleshooting a performance issue I have on a Proxmox Backup Server I run.
Basically it's an off-site system that syncs with a production PBS. The specs are as follows:
- Supermicro board + chassis
- Dual Xeon E5-2680 v4 (2x 14c)
- 128GB RAM
- 2x 480GB Seagate Nytro SSD...
Hi,
question to the devs:
Since the inception of PBS, the hook script for vzdump has become a bit useless. I want to use it to extract some backup performance and success metrics and push them into my timeseries DB.
How can I retrieve things like success, size and so on when using a proxmox backup...
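For reference, vzdump invokes the hook script once per phase (e.g. job-start, backup-end, backup-abort) with the phase name as the first argument, and passes job details via environment variables such as VMID and HOSTNAME. A minimal sketch of a hook that turns one invocation into a flat metrics record could look like this; which variables are actually populated with a PBS target (and the exact record layout) are assumptions here:

```python
#!/usr/bin/env python3
"""Sketch of a vzdump hook: collect phase + env details for a metrics DB."""
import os
import sys
import time

# Environment variables vzdump is documented to set for hook scripts;
# with a PBS storage target not all of them may be populated.
ENV_KEYS = ("VMID", "HOSTNAME", "DUMPDIR", "STOREID", "LOGFILE", "TARGET")

def collect(phase, env):
    """Build a flat record for one hook invocation."""
    record = {"phase": phase, "ts": int(time.time())}
    for key in ENV_KEYS:
        if key in env:
            record[key.lower()] = env[key]
    # Success/failure can only be inferred from the phase name here:
    # vzdump calls 'backup-end' on success and 'backup-abort' on failure.
    record["success"] = phase == "backup-end"
    return record

if __name__ == "__main__":
    phase = sys.argv[1] if len(sys.argv) > 1 else "unknown"
    print(collect(phase, os.environ))
```

Sizes and transfer rates are not exposed this way; they would have to be scraped from the task log or queried from PBS afterwards, which is exactly the gap the question is about.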
Thought I'd share this, it might be helpful for someone:
In case you want to be alerted when you have a guest running without a guest agent:
https://github.com/lephisto/check-ga
This comes in handy if you want to make sure that backups are consistent. Sometimes a GA crashes or someone...
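The underlying idea can be sketched roughly like this: for each running VM, `qm agent <vmid> ping` succeeds only if the guest agent answers, and a monitoring check maps the per-VM results to a Nagios-style exit code. This is a hand-rolled sketch of that mapping, not the actual check-ga implementation:

```python
"""Sketch: map per-VM guest-agent ping results to a Nagios-style result.

In a real check each result would come from running 'qm agent <vmid> ping'
(exit code 0 = agent responded); here the results are passed in directly.
"""

def ga_check(ping_results):
    """ping_results: dict of vmid -> True (agent answered) / False (no agent).

    Returns (exit_code, message) in the usual Nagios convention:
    0 = OK, 2 = CRITICAL.
    """
    missing = sorted(vmid for vmid, ok in ping_results.items() if not ok)
    if missing:
        return 2, "CRITICAL: no guest agent on VM(s) %s" % ", ".join(map(str, missing))
    return 0, "OK: guest agent responding on all %d running VMs" % len(ping_results)
```

A crashed or never-installed agent shows up the same way, which is the point: either case means a fs-freeze before the backup snapshot did not happen.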
Hi,
It is still unclear to me which versions of the qemu-ga have this option. Neither an installer of the latest(?) version asks me for this, nor do I have the registry key. Can someone clarify, please?
From my understanding, for snapshot backups of the underlying block device, vss-copy is...
I suspected this might be required, and indeed:
Recreating osd.3 and osd.10 solved my whole issue. As it turns out, here and there OSDs seem to get corrupted somehow when their internal structures are converted during the upgrade to Pacific. Still, I couldn't detect anything wrong in the logfiles...
I think I am narrowing the problem down.
On a production cluster I see that even with the old WPQ scheduler there is a huge performance impact (not as bad, but still not funny) on a few OSDs in the cluster:
osd commit_latency(ms) apply_latency(ms)
13 0...
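The listing above is the plain-text output of `ceph osd perf`. To spot the outliers quickly in a larger cluster, that output can be filtered with a small script along these lines (the 50 ms threshold is just an example value):

```python
"""Sketch: parse 'ceph osd perf' plain-text output and flag slow OSDs."""

def slow_osds(perf_output, threshold_ms=50):
    """Return [(osd_id, commit_ms, apply_ms)] for OSDs above threshold_ms."""
    flagged = []
    for line in perf_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) < 3:
            continue
        osd_id, commit_ms, apply_ms = int(parts[0]), int(parts[1]), int(parts[2])
        if commit_ms > threshold_ms or apply_ms > threshold_ms:
            flagged.append((osd_id, commit_ms, apply_ms))
    return flagged
```

Feeding it the captured output of `ceph osd perf` returns only the OSDs whose commit or apply latency exceeds the threshold, which makes the handful of affected OSDs stand out immediately.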
Okay, a report on my progress:
After trying to fine-tune the OPS weights and limits... it just does not work for me. Going back to the old scheduler solved the issue for me and everything works like a charm again. I know it will be deprecated in the future. I will continue trying to figure...