Hello,
I can see the HTTP 200 "datastore" request to the API:
The datastore is there just fine, but nothing in the GUI.
No error either in the console or in /var/log/api/access.log
Hey all,
I'm trying to clone some backups to a new datastore the "good" way (using local sync instead of copying files and forgetting to chown them). However, I can't seem to select the source datastore:
The source datastore menu is empty, and I can't select anything.
Permission wise, my...
Alright, looks like a layer 8 issue.
Basically, I had copied the VM backups from the root namespace into another namespace to restore said VM into a different environment. However, I had failed to chown the folder to my backup user, resulting in this:
(I'm putting the blame on the post...
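For reference, the fix is just re-owning the copied tree; a minimal sketch, assuming a hypothetical datastore path and that the PBS services run as the `backup` user (adjust both to your setup):

```shell
# Hypothetical path: adjust to the real datastore and namespace.
# After copying backup groups into another namespace as root,
# hand ownership back to the user the PBS services run as.
chown -R backup:backup /mnt/datastore/store1/ns/restore-env
```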
Not the same one as before but the same group.
2024-01-25T00:24:57+01:00: percentage done: 77.98% (36/47 groups, 48/74 snapshots in group #37)
2024-01-25T00:24:57+01:00: verify hello-pbs2:vm/9020/2023-11-24T22:50:23Z
2024-01-25T00:24:57+01:00: check qemu-server.conf.blob...
Hey,
Nothing in the journal that I can spot:
root@heyhey-PBS2:~# journalctl --since "2024-01-25 00:20:00" --until "2024-01-25 00:30:00"
Jan 25 00:21:43 heyhey-PBS2 proxmox-backup-proxy[170817]: write rrd data back to disk
Jan 25 00:21:43 heyhey-PBS2...
I'm the only one accessing this PBS, so that shouldn't be an issue.
Any way to filter journalctl's output? It's spammed by the backups running every few minutes.
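In case it helps anyone else later, one way to cut the noise (the unit name is the standard PBS proxy service; the excluded patterns are just examples from my own journal):

```shell
# Restrict the journal to the proxy service for a time window,
# then drop the recurring chatter with an inverted grep.
journalctl -u proxmox-backup-proxy --since "2024-01-25 00:20:00" \
  | grep -vE 'write rrd data|starting new backup'
```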
Hey all,
I'm seeing a weird behaviour on one of my PBS nodes. The GUI verify job, which should run every day, has been failing for a few days now. Upon further investigation, the verify job is trying to verify a non-existent (probably pruned) backup: (datastore name has been censored but isn't...
To be fair the main bottleneck in our current setup is definitely the PBS. If, one day, the main bottleneck is the network, I'll be a happy man.
Dell public prices are dumb. They get better if you have a business relationship with them, but they still get you on the RAM and storage.
Reckon we...
We do indeed have dark fibers between each DC. It's a 10G loop, except for DC4, which has 500M.
I believe there's a misunderstanding here. This is my fault; I wasn't precise enough. The backup size I stated earlier is the on-disk size. Nightly volume is between 100G and 200G.
This doesn't make...
I'd very much like help on this, as my experience with ZFS never left the homelab.
I agree with you that hardware RAID is bad. I'd also like to use ZFS, but I need a solid understanding of it first, so I can be sure that when a drive eventually dies, either me or someone else with...
Alright, thanks for your input. I'm going to reassess and consider "tiering" my PBSs: SSD-only "primary" PBSs for short- and medium-term backups, and RAID6 HDD-only PBSs for long-term storage. This might not be the optimal way of doing it, but it sure is going to be much better than...
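For what it's worth, the tiering could be wired up with a plain pull sync job from the SSD PBS to the HDD one; a sketch where every name, address, credential and schedule is a placeholder, not my real setup:

```shell
# On the long-term (HDD) PBS: register the primary PBS as a remote,
# then pull its datastore on a weekly schedule.
# All identifiers below are made up for illustration.
proxmox-backup-manager remote create primary-pbs \
  --host 192.0.2.10 --auth-id 'sync@pbs' --password 'changeme' \
  --fingerprint '<primary cert fingerprint>'
proxmox-backup-manager sync-job create pull-longterm \
  --store longterm-hdd --remote primary-pbs --remote-store fast-ssd \
  --schedule 'sat 03:00'
```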
I currently have 30 TB of dedup'ed backups on the running PBSs, with dedup factors between 60 and 110 depending on the PBS. Assuming a worst-case data growth of 20% each year, we'd reach roughly 75 TB of backups in 5 years. While 100 TB might be oversized, I do not think it is oversized by a lot.
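For transparency, the back-of-the-envelope compounding behind the 5-year projection:

```shell
# 30 TB growing 20% per year, compounded over 5 years.
awk 'BEGIN { size = 30; for (y = 1; y <= 5; y++) size *= 1.2; printf "%.1f TB\n", size }'
# prints: 74.6 TB
```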
I...
Hey all,
Continuing on my journey to build a reasonably good Proxmox Backup Server infrastructure for my company, I'm now considering different hard drive configurations, but I'm a total noob when it comes to high-capacity (for us) storage.
As of now, the idea would be to have around 100 TB of...
Hi,
Thanks for your insight. Datastore-wise this is a good idea, and I'll follow up on it.
Sync-wise, I might be limited by disk space, but I agree with you that syncing to multiple PBSs would be better.
Hey all,
I've got the opportunity to upgrade the Proxmox backup servers running in our production environment.
Currently, I have the PBS set up as follows:
DC A backs up to the PBS in DC B
DC B backs up to the PBS in DC A
DC C backs up to the PBS in DC A
I am fully aware that this is not...
Hello,
As you've stated, the Linux kernel is the bottleneck; solutions to this include XDP and DPDK. (I've also dabbled with VPP, though I find its documentation harder to follow.)
This hasn't been a priority for us, but I'm sure 10G routing on virtual machines will be...
Hey there.
I marked it as solved because I didn't need help with my issue any more. Sorry if that was misleading.
I'd recommend opening a new thread with your specific use case, as it seems to differ from mine.
Best Regards