Better and continuously maintained documentation.
Continuous automated testing. Also you make it sound like somebody is suggesting implementing orbital guidance.
Is that why VM vs CT shutdown varies so greatly in reliability between the web GUI and the CLI? Is that why some have scripts walking the nodes...
That workaround is really suboptimal, since fast storage nodes will be slowed down by the slowest nodes in the cluster. Why would I slow down backups to 50 MB/s if some nodes can happily handle 4 GB/s? And the slow nodes would still suffer.
If people want different compression options, let's give them that, BUT I would still strongly opt for NO compression, since PBS can get hammered with decompression tasks, and in many cases that significant slowdown for maybe a 5% space saving is pointless.
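To make the CPU-vs-space tradeoff concrete, here is a minimal sketch of what compression buys on data that is already high entropy (media, pre-compressed archives). It assumes the third-party Python "zstandard" module; the chunk size and level are illustrative, not what PBS uses internally.

```python
# Minimal sketch: compressing incompressible data costs full CPU time
# for essentially no space saving. Assumes `pip install zstandard`;
# chunk size and level are illustrative only.
import os
import time
import zstandard

chunk = os.urandom(64 * 1024 * 1024)  # 64 MiB of high-entropy data

cctx = zstandard.ZstdCompressor(level=3)
start = time.perf_counter()
compressed = cctx.compress(chunk)
elapsed = time.perf_counter() - start

ratio = len(compressed) / len(chunk)
print(f"compressed to {ratio:.1%} of original in {elapsed:.2f}s")
# On input like this the ratio stays around 100% while the CPU time
# is paid in full - the cost the post above argues against.
```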
Forgot to add:
“through the GUI”
Also, when I tried manually setting it in the config, corosync would often ignore/remove this part. At the moment, the only reliable way is to assign a large number of votes to one node that is always on, but when that node goes down for updates, the cluster goes down.
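For anyone trying the same workaround, this is roughly its shape in /etc/pve/corosync.conf (node names, addresses, and vote counts are placeholders; see votequorum(5) - and note this deliberately reintroduces a single point of failure):

```
nodelist {
  node {
    name: pve-anchor
    nodeid: 1
    quorum_votes: 10   # enough to hold quorum on its own
    ring0_addr: 10.0.0.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}
```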
Hi,
After some testing and feedback from proxmox staff, I would suggest:
1. Re-enable changing the compression level during backup to PBS (right now it's locked to ZSTD, but in the past it allowed uncompressed). At this point, forcing compression can create a CPU bottleneck and also increase the load on the node...
Hi,
It would be nice to allow admins of a cluster to define either:
- a minimum number of votes required for quorum (yes, admins might be aware of the pitfalls and be prepared to deal with the consequences)
OR
- define the behaviour when quorum is not achieved - the current "stop everything and just sit there" is a...
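For context, corosync's votequorum already exposes a few knobs in this direction; a hedged corosync.conf sketch (values are illustrative, see votequorum(5) for the semantics and caveats):

```
quorum {
  provider: corosync_votequorum
  expected_votes: 5
  last_man_standing: 1             # recalculate expected_votes as nodes drop out
  last_man_standing_window: 10000  # ms to wait before recalculating
  auto_tie_breaker: 1              # in an even split, the partition containing
                                   # the lowest nodeid keeps quorum
}
```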
First, @Dunuin, thanks for confirming that. Second, that sounds suboptimal: if backing up a CT (where I assume PVE and PBS are aware of the internal file structure), there could be a master hash for each file, stored somewhere - and when a backup is performed, PBS only needs to furnish PVE with a list...
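To illustrate the idea (a toy sketch, purely hypothetical - not how PBS actually works; the paths and manifest format are invented for the example):

```python
# Toy sketch of a per-file hash manifest: compare the current tree
# against the manifest from the previous backup and only ship files
# whose hash changed. Hashing still reads every file, so the real win
# would require the hashes to be maintained incrementally on the PVE side.
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def files_to_ship(root: Path, manifest: Path) -> list[Path]:
    old = json.loads(manifest.read_text()) if manifest.exists() else {}
    new, changed = {}, []
    for p in sorted(root.rglob("*")):
        if not p.is_file():
            continue
        digest = file_hash(p)
        new[str(p)] = digest
        if old.get(str(p)) != digest:
            changed.append(p)
    manifest.write_text(json.dumps(new))
    return changed
```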
That is maybe one VM in prod. Everything else is filled with files starting at one gig, and the test CT/VM was filled with similar-sized files. Bottom line: my prod doesn't compress or dedup (all pools have compression disabled because there is no point to it). I know that my prod might be...
No, I didn't, because I'm not testing it on anything close to production hardware... but it looks like I'll soon have to zone out a rack and start real testing (at least now I know that CTs have some backup limitations, but I will need to compare production performance vs VMs).
What I have...
I'll give you that. And also I'm starting to see why.
Anyway, back to the backup. I still believe the backup process is CPU bottlenecked (even if it's not compression/decompression). It's visible on a scrap test setup - makes me wonder how much horsepower a PBS server will need when in...
OK, this part was not explicit, @fabian, and I would say it's a rather important detail (but again, it's maybe me being dumb and blind to miss it).
Side note, from further investigation: it seems that there is a lot of CPU overhead from compressing/decompressing data on the fly while doing...
Just to add: CT 113 is based on the Turnkey fileserver template and has a cron job that automatically updates packages; that is the only difference between each of those backups. In the most recent backup job, I can't even scroll through the window because there are so many lines about the...
So the initial feedback is this: I tested your theory, and when I trigger the group verification of the whole CT tree (image below), PBS STILL tries to verify the whole snapshot - I'm right now an hour into it trying to verify the newest unverified snapshot - I would assume that it should try...
And this is what I meant by "enveloping different logic within a switch case" - it could be made NOT fs-agnostic. But I guess the Proxmox folks would need to weigh the customer benefit in performance against the extra work of implementing and maintaining this. And no, after 20 years of software engineering I'm...
Yes, you are absolutely right. THOUGH, you (i.e., the Proxmox operating system) already know what the filesystem of the underlying storage is, because I select it from a drop-down in the GUI - ZFS. There can be separate logic per storage type; it's not that complicated to encapsulate different...
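A toy sketch of what I mean by encapsulating the logic per storage type (hypothetical, not actual PVE/PBS code; the `zfs diff` fast path is just one possibility, and the storage-type keys are made up for the example):

```python
# Dispatch change detection on the storage backend the admin already
# picked in the GUI, with a filesystem-agnostic fallback.
import subprocess
from pathlib import Path

def changed_files_zfs(dataset: str, old_snap: str, new_snap: str) -> list[str]:
    # ZFS fast path: snapshot diffs avoid re-reading unchanged data.
    # Note: rename lines in `zfs diff` output carry two paths; a real
    # implementation would parse those properly.
    out = subprocess.run(
        ["zfs", "diff", f"{dataset}@{old_snap}", f"{dataset}@{new_snap}"],
        capture_output=True, text=True, check=True,
    )
    return [line.split("\t")[-1] for line in out.stdout.splitlines()]

def changed_files_generic(path: str) -> list[str]:
    # FS-agnostic fallback: every file is a candidate and must be re-read
    # (the slow path discussed above).
    return [str(p) for p in Path(path).rglob("*") if p.is_file()]

def changed_files(storage_type: str, target: str,
                  snaps: tuple[str, str] | None = None) -> list[str]:
    if storage_type == "zfspool" and snaps:
        return changed_files_zfs(target, *snaps)
    return changed_files_generic(target)
```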
So I'm in the same boat. Before implementing PBS in production, I deployed it in a test cluster. A test LXC (4 TB) takes 2 hours to back up, while nothing has changed. The strange thing is that during the 2 hours the backup takes to finish, there isn't much disk activity on PBS or on the machine with...
Sorry Fabian, I've found exactly this answer in a different topic. If an admin would kindly delete this thread so it doesn't clog the forum, that would be nice.
BTW, thanks for the rapid reply.
Funnily enough, I was googling for AWS stuff and found my old topic... so to the original detractors making "remarks" - I guess the creation of PBS proved YOU wrong :*