Backup of one specific LXC mountpoint?

Apollon77

Well-Known Member
Sep 24, 2018
153
13
58
47
Hi,

It can easily be configured whether, besides the rootfs, other mounted filesystems are also stored in the backup.

For example, I have a Redis container where the data directory /var/lib/redis is on its own mountpoint. Backup-wise, my main use case would be to store the rootfs only occasionally, but the data filesystem more often. Is there any way to do this selectively?

Ingo
 
not really, as the rootfs is always included.
 
:-( Then this would be a feature request from my side :)
I also have one VM with a very large Sentry installation inside, which runs in Docker, and there the "data directories" are only a very small part of the disk usage. Always having to back up 160 GB instead of roughly 1 GB is a big difference :-)
 
yeah, you'd need a '--volumes' option to vzdump that specifies which disks/mps are backed up, irrespective of the backup= option set in the guest config.
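
For reference, a minimal sketch of the existing backup= flag on a mountpoint (container ID 101 and the storage/volume names are assumptions):

Code:
# Mark the data mountpoint of a hypothetical container 101 for inclusion in vzdump backups
pct set 101 -mp0 local-lvm:vm-101-disk-1,mp=/var/lib/redis,backup=1

# Resulting line in /etc/pve/lxc/101.conf:
# mp0: local-lvm:vm-101-disk-1,mp=/var/lib/redis,backup=1

Today this flag only decides whether a mountpoint is included at all; it cannot express "rootfs rarely, data often".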
 
The problem with such a rootfs-less backup is that it's not restorable as a container, so you might as well just run tar or proxmox-backup-client in the container to back up the data.
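
A sketch of that in-container approach with proxmox-backup-client (the repository user, host, and datastore names are assumptions; only the documented backup subcommand and PBS_* environment variables are used):

Code:
# Assumed PBS repository: user backup@pbs on host pbs.example.com, datastore "store1"
export PBS_REPOSITORY='backup@pbs@pbs.example.com:store1'
export PBS_PASSWORD='secret'     # assumption: password auth; an API token also works

# Back up only the Redis data directory as a pxar archive
proxmox-backup-client backup redis-data.pxar:/var/lib/redis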
 
yeah, you'd need a '--volumes' option to vzdump that specifies which disks/mps are backed up, irrespective of the backup= option set in the guest config.
Exactly ... or "--device" ("pct fsck" has this ...)
 
The problem with such a rootfs-less backup is that it's not restorable as a container, so you might as well just run tar or proxmox-backup-client in the container to back up the data.
Hm ... maybe this really is the way to go ... thank you! I will try it after my vacation and give feedback.
 
@fabian Hm ... not as easy as it seems :-(

So I added the apt repo to my container (Ubuntu Bionic based), then worked around the fact that the repo is not signed.
But after that I end up with:

Code:
The following packages have unmet dependencies:
 proxmox-backup-client : Depends: libfuse3-3 (>= 3.2.3) but it is not installable

libfuse3 does not seem to be available for Bionic ... only Focal (20.04) seems to have it :-(
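
For reference, a sketch of the steps described above (the exact repository line is an assumption; during the PBS beta the component may have been pbstest rather than pbs-no-subscription, and [trusted=yes] is one way to skip the missing signature check):

Code:
# Add the (buster-based) PBS repo inside the Ubuntu container -- repo line is an assumption
echo "deb [trusted=yes] http://download.proxmox.com/debian/pbs buster pbs-no-subscription" \
    > /etc/apt/sources.list.d/pbs-client.list
apt update
apt install proxmox-backup-client   # fails on Bionic with the unmet libfuse3-3 dependency shown above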
 
Hm ... and it is not getting better :-(

I now updated one container to Focal (Ubuntu 20.04). Result:

Code:
The following packages have unmet dependencies:
proxmox-backup-client : Depends: libapt-pkg5.0 (>= 0.8.0) but it is not installable

... I think it is the same issue as reported in https://forum.proxmox.com/threads/b...-install-from-repo-on-ubuntu-20-04-uts.74633/

So what now? Any chance to get the backup client running on Ubuntu 18.04 and/or 20.04?!
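
A quick way to see why it is not installable: the client package was built against the apt library from Debian Buster, while Focal ships a newer ABI (the package names below are what those releases ship, as far as I know):

Code:
# On Ubuntu 20.04 (Focal) only the newer apt runtime library exists
apt-cache search libapt-pkg
#   libapt-pkg6.0 - package management runtime library

# The buster-built client wants libapt-pkg5.0, which has no candidate on Focal
apt-cache policy libapt-pkg5.0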
 
it will probably get better once we split -client and -server properly (e.g., the systemd and apt deps are for the server only, but that is not handled correctly atm). for fuse we might need to provide both if possible, or make that part optional. for now, building just the backup client binary with cargo (e.g., on a PBS/PVE system with the devel repo) and copying that should work in many cases, similar to what AUR does.
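
A rough sketch of that build-and-copy approach (the git URL and binary name match the public proxmox-backup sources; having the Rust build dependencies and the other proxmox helper crates available is assumed):

Code:
# Clone the sources and build only the client binary
git clone https://git.proxmox.com/git/proxmox-backup.git
cd proxmox-backup
cargo build --release --bin proxmox-backup-client

# Copy the resulting binary into the container/VM that should be backed up
ls target/release/proxmox-backup-client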
 
yes and no ;) the current combined repo/crate allows for faster iteration, which is nice for the current phase of development..
 
Ok, then I need to wait, it seems ... many users use Ubuntu like me, and I think it would also be valuable test feedback on how to use the backup client "itself" in such cases ...
 
like I said - the easiest way to do that now is probably to just build the (mostly statically linked) client binary with cargo, like AUR does
 
Ok, @fabian, it cost me the whole night, and even after many tries I end up with one last error (rustc 1.43.0):

Code:
error[E0658]: use of unstable library feature 'str_strip': newly added
   --> src/server/worker_task.rs:232:66
    |
232 |             if let Some(rest) = iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
    |                                                                  ^^^^^^^^^^^^
    |
    = note: see issue #67302 <https://github.com/rust-lang/rust/issues/67302> for more information

error: aborting due to previous error

For more information about this error, try `rustc --explain E0658`.
error: could not compile `proxmox-backup`.

So do I need a nightly rustc, or how do I get around that?

EDIT: stabilized in 1.45.0 ... but there is not really a package available ... :-(
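
One way around the missing distro package would be rustup (an assumption that installing rustup is acceptable here):

Code:
# Install and pin a toolchain that has str_strip stabilized (1.45+)
rustup toolchain install 1.45.2
rustup override set 1.45.2       # pin this toolchain for the current source directory
cargo build --release --bin proxmox-backup-client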

EDIT 2: Ok, uninstalled and reinstalled rustc ... now the error is:

Code:
error: reached the type-length limit while instantiating `<std::boxed::Box<std::future::fr...>, ()}]>, ()}]>, ()}]>>>>>::into`
    |
    = note: consider adding a `#![type_length_limit="1078687"]` attribute to your crate

error: aborting due to previous error

error: could not compile `proxmox-backup`.

EDIT 3: Ok, seems to be a bug in Rust 1.46 ... so back to 1.45.2 ... let's see

EDIT 4: Ok, after also manually compiling fuse3 and other stuff, I now got it to compile ... now I will test on a second system
 
Bad luck:

Code:
./proxmox-backup-client
./proxmox-backup-client: error while loading shared libraries: libfuse3.so.3: cannot open shared object file: No such file or directory

Even on the machine where I did the build. So that means I also need to somehow manually install fuse3 for the client ...

... ok, on Ubuntu 20.04, after installing libfuse3, it seems to work
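
For completeness, a sketch of checking and satisfying that runtime dependency on Ubuntu 20.04 (libfuse3-3 is the package name Focal uses for the fuse3 shared library, as far as I know):

Code:
# Show which shared libraries the binary cannot resolve
ldd ./proxmox-backup-client | grep 'not found'

# Install the fuse3 runtime library on Ubuntu 20.04
apt install libfuse3-3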
 
@fabian Yes, it is working, but honestly ... in the end I have no idea whether I got all the correct code together :-) Only one repo uses tags for versions; the other repos I needed to clone only have master ... so ... yes, I have a working version ...

But please put on your roadmap offering an "as few deps as needed" client-only package that also works on Ubuntu :-)
 
