I'm trying to find a maxthreads or maxconnections value in the code, but I can't find one. @fabian or @Thomas Lamprecht, could it be a MaxClients-kinda limit?
Oh nice!
What is the price point you are aiming at?
And do you offer the service for individuals as well?
So something was wrong locally on the PBS, and at that time it was idle... right?
I just tried the Tuxis PBS and ran into kinda the same issue. pvestatd "stops working" until a service restart.
I also had to kill all proxmox-backup-client processes to get my PVE instance working like it should again, because the backup job could not be stopped via the GUI.
No VM crashed or had other issues though - it was just the UI getting messed up while the PBS storage was connected.
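In case anyone runs into the same state, this is roughly what got things going again for me (only kill the client processes if you're sure no other backup is running):

systemctl restart pvestatd        # restart the stuck PVE status daemon
pkill -f proxmox-backup-client    # remove the leftover backup client processes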
The PVE is located in a datacenter with a 1 Gbit/s uplink. Transmission speed was ~10-15 MB/s (MB/s - not Mbit/s!).
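For scale: 1 Gbit/s / 8 = ~125 MB/s theoretical maximum, so 10-15 MB/s uses barely a tenth of the uplink - the link itself is probably not the bottleneck.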
The first 3 little LXCs ran through the backup successfully, though.
I'll set up my own PBS instance later to test whether that works "better" in terms of connectivity.
Then I'll check out whether I can use Tuxis' PBS as a Remote without those issues.
Anyways, great offer @tuxis! And great work from the Proxmox Team!
My last successful backup ended at "INFO: End Time: 2020-09-28T15:02:37+02:00" according to the backup job log - after that the problems started. I initially thought it was because the VM was a few GB in size (~7 GB) while the 3 LXCs before were all < 1 GB.
Tuxis' PBS shows all 4 backups as valid though.
The 500 GB is only while the product is in beta, right? So after the beta, everyone who helped get your product stable and to market will need to pay for the service?
Will the pricing be the same after the Beta?
As with most commercial companies, we try to make money, so yes. Everybody that helped get the Proxmox product stable has also been able to use free backup services for a few months. Whether they feel it's worth their while is up to them.
I think both ways should be possible.
I just created bug 3044 for syncing to a remote via push. I think that makes sense, not sure how Proxmox feels about that though.
We could test if it improves restore times.
I'll add a cache disk too today, but I don't expect that to be very useful since the data is not being read very often.
Will the pricing be the same after the Beta?
I read something about 15€/TB monthly which sounds okay to me, since the incremental backups and data dedup would save lots of backup storage anyways.
I think both ways should be possible.
Pull is great because, for example, the primary PBS does not need to know the credentials of the offsite PBS, which hardens the offsite PBS against attack should the primary PBS get compromised.
Push is great if you cannot or just don't want to trust the remote PBS with credentials for the other PBS instance. Or if you cannot or don't want to expose your PBS instance to the public Internet.
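For reference, a pull setup on the offsite PBS would look roughly like this - host, store and user names are placeholders, and the exact parameter names may differ in the beta, so check proxmox-backup-manager's help output first:

# register the primary PBS as a remote on the offsite box
proxmox-backup-manager remote create primary-pbs \
    --host pbs.example.org --userid sync@pbs \
    --password 'SECRET' --fingerprint <cert-fingerprint>
# then pull one of its datastores on a schedule
proxmox-backup-manager sync-job create pull-primary \
    --remote primary-pbs --remote-store sourcestore \
    --store offsitestore --schedule daily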
We could test if it improves restore times.
Also, I didn't experience those pvestatd issues this time - the storage overview worked flawlessly, everything was shown and felt "snappy".
Will test further backups and of course some restores over the next days...
@tuxis
Sadly, I wasn't able to add my PBS as a Remote to pull backups from my PBS to yours.
I have not explored the whole permission set of PBS yet, but I assume this is intentional?
Oh, and would you mind sharing the hardware specs of your PBS instance?
I'm curious what kind of scale is required to keep up with the load of a public freebie campaign.
E.g.: 50 testers x 500 GB = 25 TB of storage required... plus RAID overhead (parity disks) - let alone the CPU/RAM resources required to allow that many PVE servers to send backups (in parallel) to the PBS.
So, your storage is not local or at least "nearby" like a SAN/NAS in the same Rack or DC.
- 2 TB Ceph RBD disk on daDup (spinning disks, 25 km away)
Since there is not much storage in use yet, could you try adding some directly attached HDDs or some local iSCSI/NFS as datastore to compare the performance?
There is currently 1 TB of backups in the pool.
So, your storage is not local or at least "nearby" like a SAN/NAS in the same Rack or DC.
Is that storage connected over a WAN link, or do you have a dark fiber? Or to ask differently: what's the bandwidth it's connected with?
I only did a quick read-up on your daDup service, so I don't really know what kind of performance to expect there. I'm just wondering if the latency of the remote storage is simply too much for an IO-intensive task like running backups.
It may fit for cold storage of already-finished backups, but what about fresh backups being written live?
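If you want hard numbers for that link, something like this would do (the hostname is a placeholder, and iperf3 has to run as a server on the storage side first):

ping -c 10 storage.dadup.example    # round-trip latency to the storage backend
iperf3 -c storage.dadup.example     # raw throughput; run 'iperf3 -s' on the other end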
Since there is not much storage in use yet, could you try adding some directly attached HDDs or some local iSCSI/NFS as datastore to compare the performance?
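Something like this should be enough for a quick comparison - the path and datastore names are made up, and note that the benchmark mainly measures client-side and TLS speed rather than the datastore disks:

# hypothetical datastore on a locally attached disk mounted at /mnt/local-hdd
proxmox-backup-manager datastore create local-hdd /mnt/local-hdd
proxmox-backup-client benchmark --repository sync@pbs@localhost:local-hdd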
From what I read in other posts so far, Proxmox seems to recommend local ZFS storage above anything else, while I also read that using NFS as a storage backend is "possible". Not sure if that's even a supported use case (or will be one after release) though.
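If it helps, a local ZFS datastore is quickly set up for testing - a minimal sketch, where pool layout and device names are placeholders you'd want to double-check before running:

zpool create backup-pool mirror /dev/sdb /dev/sdc    # mirrored pool from two spare disks
zfs create backup-pool/pbs-data                      # dedicated dataset for the datastore
proxmox-backup-manager datastore create zfs-store /backup-pool/pbs-data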