I can see that it's possible to download various parts of the PBS backup run at a specific time using the web interface.
Is it possible to upload a set of previously downloaded files back to a PBS so that they become visible and can be used to restore a VM/LXC?
I have a few LXCs running on a PVE 6.4-14 server. Three of them render the Disk IO graph OK in the Summary tab, but one of them shows no Disk IO at all for the past hour, which is very odd as this server is quite busy. The only notable difference between them is that the LXC with the missing Disk IO has a...
I've tweaked the code somewhat to try to ascertain the sizes of the CTs too, using xxd as in gerko's original example. I'm not sure I've got the logic quite right for CTs when calculating the chunk sizes in the .didx files, but perhaps someone would like to try this out and let me know if...
This link explains the file structures: https://pbs.proxmox.com/docs/proxmox-backup.pdf#be. According to this documentation, the .fidx file has a fixed digest size (which presumably makes it simpler to parse with xxd), whereas the .didx files do not, using an offset in between each digest part...
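For what it's worth, here is a rough, untested sketch of how a single .didx file could be read, assuming the layout I've taken from the docs is right: a 4096-byte header followed by 40-byte entries, each an 8-byte little-endian end offset plus a 32-byte digest. The header size, the entry layout and the idea of diffing consecutive offsets to get chunk sizes are all my assumptions, so please check before relying on it:

#!/bin/bash
# Untested sketch: assumes a .didx file is a 4096-byte header followed by
# 40-byte entries (8-byte little-endian end offset + 32-byte digest).
# Slow (one xxd call per entry) but hopefully easy to follow.
DIDX="$1"
HEADER=4096
ENTRYSIZE=40
FILESIZE=$(stat -c%s "$DIDX")
ENTRIES=$(( (FILESIZE - HEADER) / ENTRYSIZE ))
PREV=0
for (( i=0; i<ENTRIES; i++ )); do
  POS=$(( HEADER + i * ENTRYSIZE ))
  # xxd -e -g 8 prints the 8-byte group as a single little-endian value
  END=$(( 16#$(xxd -s "$POS" -l 8 -e -g 8 "$DIDX" | awk '{print $2}') ))
  echo "chunk $i logical size: $(( END - PREV )) bytes"
  PREV=$END
done
echo "total logical size: $PREV bytes across $ENTRIES chunks"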
I've built on gerco's example above to make this (hopefully) more useful across multiple backups, and specifically for a VM (it won't work with CTs yet).
It takes a single VM ID parameter, so for example:
./calcbackupsize.sh 111
The script grabs all the relevant .img.fidx files...
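The general approach is roughly the simplified sketch below rather than the full script: pull the 32-byte digests out of every .img.fidx for the VM, de-duplicate them, then sum the on-disk size of each referenced chunk. The datastore path is only an example, and the 4096-byte .fidx header and the .chunks/<first four hex chars>/<digest> layout are my reading of the docs, so treat it as a sketch:

#!/bin/bash
# Simplified sketch - assumes a .fidx file is a 4096-byte header followed by
# 32-byte SHA-256 digests, and that chunks live under <datastore>/.chunks/XXXX/<digest>
# (XXXX = first four hex characters of the digest). Adjust paths to your setup.
STORE="/mnt/datastore/backup"   # example datastore path
VMID="$1"
TMP=$(mktemp)
# collect the digests referenced by every .img.fidx for this VM, across all snapshots
find "$STORE/vm/$VMID" -name '*.img.fidx' | while read -r FIDX; do
  xxd -s 4096 -c 32 -p "$FIDX"
done | sort -u > "$TMP"
# sum the on-disk (compressed) size of each unique chunk
TOTAL=0
while read -r DIGEST; do
  CHUNK="$STORE/.chunks/${DIGEST:0:4}/$DIGEST"
  [ -f "$CHUNK" ] && TOTAL=$(( TOTAL + $(stat -c%s "$CHUNK") ))
done < "$TMP"
echo "VM $VMID: roughly $TOTAL bytes on disk across $(wc -l < "$TMP") unique chunks"
rm -f "$TMP"

Bear in mind chunks can be shared with other guests, so this is the space those snapshots reference rather than what you'd get back by pruning them.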
Hmm, thanks for taking the time to explain.
I see that there is another thread on the same subject, https://forum.proxmox.com/threads/how-to-get-the-exactly-backup-size-in-proxmox-backup.93901/, as well as the thread you mentioned above.
I absolutely love the fact that Proxmox added better...
Is that 4.96 GB for that VM?
So perhaps I could take the figure from the "size" column in PBS for the oldest backup, then add on all the transferred data for the subsequent 6?
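To illustrate with made-up numbers: if the oldest snapshot's "size" shows 4.96 GB and each of the six newer ones transferred roughly 0.15 GB of new data, that would come to about 4.96 + (6 × 0.15) ≈ 5.86 GB for that guest, ignoring compression and any chunks shared with other guests.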
Is there a way to get the real amount of disk space used per server backup?
Currently I have 7 nightly rolling snapshot backups for my VMs and CTs, but each one of them shows a very similar file size in PBS (understandably), generally growing slightly each night as more data is added...
So after a rebuild of the RAID array and a restart of Proxmox 7.1-7, I followed the instructions above to modify the lvm.conf file. After a reboot, all seems to be working OK.
Oddly, I had a disk failure today and have swapped out one of the four disks in the RAID5 array, hence why I find myself on this thread, as I have a similar error:
activating LV 'pve/data' failed: Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active...
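In case it's useful to anyone else hitting the same message: since the error complains that pve/data_tmeta is still active, one thing that may be worth trying first (entirely at your own risk, and not necessarily the same as the lvm.conf change referred to above) is to deactivate the pool's sub-LVs and then activate the pool again:

# see which LVs are currently active
lvs -a pve
# deactivate the thin pool's sub-LVs (LVM may refuse if it treats them as hidden)
lvchange -an pve/data_tmeta
lvchange -an pve/data_tdata
# then try bringing the pool up again
lvchange -ay pve/data
# if the pool metadata itself is damaged, lvconvert --repair pve/data is the usual next step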
You cannot share a dedicated GPU with more than one VM, unfortunately. EDIT: that might not be strictly true; you may be able to allocate the GPU to more than one VM, but only one VM can use it at any one time.
Thanks AVW. I swapped out the Ryzen 5 2600X for an AMD A10-9700, which has its own built-in GPU, and was then able to pass through the 1080 Ti.
The downside is that the A10 is a less capable CPU in terms of speed and cores, but as this is only a testbed it's not critical.
Thanks AVW. So it is possible but not recommended for the reasons you mentioned?
The mobo is a Gigabyte GA-A320M-DS2, and while it has 1 x PCIe x16 slot and 2 x PCIe x1 slots, the smaller slots are covered by the thick 1080 Ti GPU.
I have a home lab setup using a relatively low-specced Ryzen 5 2600X (no integrated graphics). My mobo has VGA and DVI ports, but presumably these will only work if the CPU has a built-in GPU.
I also have an Nvidia 1080 Ti mounted in a PCIe slot which I'd like to use with PCI passthrough, however...
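For anyone else attempting the same thing, the rough steps I understand to be involved look like the sketch below; the PCI address and VM ID are only examples (check lspci and the PVE passthrough docs for your own board), and bear in mind the host then loses its only display output:

# 1. enable IOMMU on the kernel command line in /etc/default/grub, e.g.
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
#    (Intel CPUs also need intel_iommu=on; AMD has it enabled by default)
update-grub

# 2. load the vfio modules at boot, then reboot
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# 3. find the GPU's PCI address (0a:00.0 is just an example)
lspci -nn | grep -i nvidia

# 4. attach it to the VM (needs the q35 machine type for pcie=1)
qm set 111 -hostpci0 0a:00,pcie=1,x-vga=1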
Loving PBS, but it would be great to see the following added as standard so that it's more in line with PVE. I wondered if the below was on the roadmap for the future and if anyone else might find these useful:
Firewall management (like PVE)
Whilst I know I can use iptables for the firewall element...
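In the meantime, something like the following covers the basics with iptables by limiting the PBS web UI/API port (8007) to a management subnet; the subnet is only an example, and the rules still need persisting (e.g. with iptables-persistent):

# allow the PBS GUI/API only from the management network, drop everything else on 8007
iptables -A INPUT -p tcp --dport 8007 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8007 -j DROP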