Thanks, that's great.
Any issues as far as update order goes? I'm assuming PVE first, PBS second (starting with the secondary node first, of course)?
So you're backing up pve1 to pbs2, and pbs2 syncs its datastore to pbs1. Are you running any VMs/LXCs on pve2 (the DR node), or is it just an 'empty' DR node? Have you done pbs1 to pve1...
Hi,
I want to add a second server that will be DR pve2 + pbs1 for the primary pve1. pbs1 will replicate to a remote pbs2 as well.
My question is: do I install PVE first and PBS second, or vice versa? Which order is better from an upgrading standpoint? Would running PBS as an LXC/VM be better on the DR...
Hi,
I know we can secure the GUI and SSH with 2FA, but is it possible to add 2FA to console login?
Also, with many companies requiring encryption for data at rest, how are you encrypting data?
The easiest is to do it at the storage level, but that requires manual interaction with every reboot unless you...
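Since console login goes through PAM, one possible approach (not a Proxmox-specific feature, just standard Debian tooling) is TOTP via the `libpam-google-authenticator` package. A minimal sketch, assuming that package is installed and each console user has run `google-authenticator` once to create their `~/.google_authenticator` secret:

```
# /etc/pam.d/login -- add near the top, before the common-auth include,
# so a TOTP code is required in addition to the password on the console:
auth required pam_google_authenticator.so
```

SSH and the web GUI keep their own 2FA settings; this only affects getty/console logins, and locking yourself out is easy, so test on a non-critical box first.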
I have a main PBS1 and a remote PBS2. On PBS2 I set up a pull sync job from PBS1, and I see it pulls more than what exists on PBS1. How is that possible?
The main PBS1 is dedicated hardware with local ZFS; the remote PBS2 is virtual with a TrueNAS NFS backend.
The VM backup for one VM on PBS1 has count 7 (keeping 7 backups...
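One common reason for the target holding more snapshots than the source: a pull sync only adds snapshots, it never deletes ones that were since pruned on the source, so the target accumulates unless you either give it its own prune job or tell the sync to remove vanished snapshots. A sketch, using a hypothetical job ID `pull-pbs1`:

```
# Make the pull sync delete snapshots that no longer exist on the source,
# so PBS2 mirrors PBS1's prune policy instead of accumulating forever:
proxmox-backup-manager sync-job update pull-pbs1 --remove-vanished true
```

The same option is exposed in the GUI on the sync job's edit dialog.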
Sorry to hijack the thread, but on my PBS the ZFS GUI created the pool with /dev/xxx device names. Is there a way to convert it to /dev/disk/by-id? I tried to export, but it says the pool cannot be unmounted.
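The usual fix is an export followed by a re-import pointed at the by-id directory; the "cannot be unmounted" error typically means a service still has the datastore busy. A sketch, assuming a non-root pool named `backup` (substitute your pool name) used as the PBS datastore:

```
# Stop the services holding the datastore open, then re-import by-id:
systemctl stop proxmox-backup-proxy proxmox-backup
zpool export backup
zpool import -d /dev/disk/by-id backup
systemctl start proxmox-backup proxmox-backup-proxy
# A root pool can never be exported while booted from it; for that case
# the same export/import has to be done from a live/rescue environment.
```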
I was thinking more in terms of expanding storage in the future without shutting down and handling everything at the OS level, but this is also a good point!
To make it easier and safer: add new disk(s) on the same or a different storage backend, attach them to the VM, boot with a live ISO (HBCD, UBCD, etc.) and clone everything to the new drives while resizing the partitions. Then you can delete the old drives and boot from the new ones. A longer process, but safer imho.
Edit: only...
Thanks. I was wondering if anything was added to the native installer to make this possible. We try to keep our installs as vanilla as possible, but this is definitely a good workaround.
Not to hijack the thread, but is the Proxmox cluster network still used for migration on shared storage? If corosync is on a dedicated 1 Gb ring, will migration on shared storage have a huge performance impact? Can the migration network be switched to the VM traffic network? Or will that introduce performance...
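By default migration traffic does use the cluster network, but PVE lets you point it at a separate network cluster-wide. A sketch of the relevant `datacenter.cfg` setting, with a hypothetical 10.10.10.0/24 migration subnet:

```
# /etc/pve/datacenter.cfg -- route migrations over a dedicated network;
# type=secure tunnels over SSH, type=insecure trades encryption for
# throughput on a trusted link:
migration: type=secure,network=10.10.10.0/24
```

Keeping migration off the corosync ring is generally advisable anyway, since saturating that link can cause cluster membership flapping.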