We run all our VMs with the QEMU agent option "freeze-fs-on-backup=0" set. It's not ideal, but it's the lesser of two evils. Restoring a client backup (which is an incredibly rare event anyway) results in the same outcome as if the VM had lost power and restarted at that point in time. Sure, it's...
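For reference, the setting can be applied per VM from the CLI as part of the agent options; a minimal example (VMID 100 is just a placeholder):

# keep the guest agent enabled, but skip fs-freeze/fs-thaw during backups
qm set 100 --agent enabled=1,freeze-fs-on-backup=0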
I can't see why that'd impact the pool size. You still have the raw space in the OSDs, even if they share the same physical device. You'll burn more RAM per node by doing this, as each OSD uses 4GB of RAM by default. We didn't end up doing this; we just run 1 OSD per device and it's been...
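If the per-node RAM cost is the concern, the per-OSD memory budget can be tuned down from the 4GB default; a rough sketch (the 2GiB value is only an example, and shrinking it trades away cache and performance):

# check the current default, then lower it cluster-wide for all OSDs
ceph config get osd osd_memory_target
ceph config set osd osd_memory_target 2147483648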
Does the ifupdown2 shipping with PVE 8 support renaming interfaces? This bit us a while back when we tested PBS, which replaced ifupdown with ifupdown2. We rename the interfaces to something meaningful to a sysadmin, which was ignored by ifupdown2, so we lost all network connectivity.
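For what it's worth, one rename mechanism that doesn't go through ifupdown/ifupdown2 at all is a systemd .link file matched on the MAC address (placeholder below), which renames the NIC before the network config is applied. You may need to regenerate the initramfs for it to take effect early enough, and we haven't re-verified this on PVE 8:

# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0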
After some trial and error I can achieve this using
proxmox-backup-client restore vm/999/2023-07-02T00:36:40Z drive-scsi0.img - | rbd import - vm-111-disk-0
It's not pretty so I hope there's some integration of ceph and pbs. Using the above I restored a backup of a disk image from a VM...
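For anyone else trying this, here is a slightly more explicit version of the same pipeline, with the repository and destination pool spelled out (the names are examples only):

# stream the raw disk archive from PBS straight into a new RBD image
proxmox-backup-client restore vm/999/2023-07-02T00:36:40Z drive-scsi0.img - \
  --repository backup@pbs@pbs.example.com:datastore1 \
  | rbd import - rbd/vm-111-disk-0

After that, the new image still has to be attached to a VM config manually (e.g. with qm set), since nothing on the PVE side knows the restore happened.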
Hi
Is there a way to specify the "target" for a "proxmox-backup-client restore" that will write directly to a ceph cluster to create a new RBD image? I've hunted through the doco but can't find anything other than restoring to a file on a local filesystem.
Thanks
David
...
Hi
We're evaluating PBS and for our use case I created 2 datastores on the same underlying filesystem. The reason for the 2 datastores is that some of the backups are to be synced offsite, but not all of them. So, one datastore for replicated backups and the other datastore for local-only...
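For reference, the layout is nothing more exotic than two datastores pointed at different directories on the same filesystem, roughly like this (paths and names are examples), with the offsite PBS then pulling only the "replicated" store via its sync job:

proxmox-backup-manager datastore create replicated /mnt/backup/replicated
proxmox-backup-manager datastore create local-only /mnt/backup/local-only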
I'm not sure why you aren't understanding this. You can manually do whatever you want. Just don't use another orchestration tool like cephadm to install other features or add-ons as it may overwrite the configuration files generated by pveceph.
No, you are missing the point. Using cephadm to install any ceph add-ons "has the ability to break the existing ceph-environment". You cannot use 2 different orchestration tools for the same ceph cluster.
Oh, that's a shame. We've been waiting on that feature so we can look at moving to PBS from our own backup solution. We'll have a look at this on our lab cluster and wait for the UI to catch up with the vzdump change.
Hi all,
The release notes say that the option to disable the guest agent fsfreeze during a backup was included in 7.4 but I can't find it. Is that exposed in the UI? It doesn't show up on the VM Options page.
The option to not do a fs freeze has also been added to PVE (thank you Christoph Heiss). The patch has been "applied" but I don't know anything about release management at Proxmox so I don't know when we'll see it in the repo.
https://lists.proxmox.com/pipermail/pve-devel/2023-February/055653.html
Hi
Can we assume that everything is still working fine for you on 7.3.3?
Can anyone else here confirm that this problem is resolved when running 7.3.3?
No, I didn't resolve this with the dashboard. We ended up pulling metrics out of the ceph command line tools and ingesting them into our monitoring system so we can have reasonable visibility and alerting.
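The extraction itself is nothing fancy, just the JSON output of the ceph tools fed into a collector; something along these lines (the jq expressions are illustrative, and field names shift a little between Ceph releases):

# overall health and raw usage
ceph status --format json | jq -r '.health.status'
ceph df --format json | jq '.stats.total_used_bytes'
# per-pool usage
ceph df --format json | jq '.pools[] | {name: .name, used: .stats.bytes_used}'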
The "bad idea" isn't about enabling the dashboard, it's about using cephadm to enable...
And none of what you've quoted provides what we've all been asking for, for years. We want a way to manually trigger the bulk migration that the HA does on shutdown. And we want it not to migrate back until we tell it to, even if the node is rebooted. It's been a standard feature in VMware for...
Here's a feature request from about 18 months ago asking for this feature. Perhaps this can be attached to that?
https://forum.proxmox.com/threads/feature-request-maintenance-mode-and-or-drs.93235
Hi Thomas,
Unless something has changed, the current HA does not provide what we need. It migrates workloads when the node goes down and then brings them back when the node comes up. That's fine if the node crashes or something, but not if we want to do maintenance on the node. We need a...
Will this be used to introduce a "node maintenance mode"? We still really need a way to easily take a node out of production so we can work on it. At the moment we still use some homegrown scripts to migrate VMs around if we want to drain a node.
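The homegrown script is essentially just this (a simplified sketch; TARGET is whatever node we're evacuating to, and it only covers running QEMU VMs, not containers):

#!/bin/bash
# live-migrate every running VM on this node to $TARGET
TARGET=pve2
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm migrate "$vmid" "$TARGET" --online
done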
Ok, thanks @fiona. We were just experiencing the "Windows update never completes" issues. We'll have a look at upgrading pve-qemu-kvm and see how it goes.