as soon as i updated to 6.8.4-2-pve i got these messages, and starting a backup kills the server a few mins in
reverting to 6.5.13-5-pve brings everything back to normal
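in case it helps anyone, this is roughly how i keep the box on the older kernel for now (just a sketch, proxmox-boot-tool is the stock tool and the version is the one from above):

proxmox-boot-tool kernel list              # show the installed kernels
proxmox-boot-tool kernel pin 6.5.13-5-pve  # keep booting this one until it's unpinned
proxmox-boot-tool refresh                  # update the bootloader entries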
Thanks, that's great.
Any issues as far as updating? I'm assuming pve first, pbs second (starting with the second node first, ofc)?
So you're backing up pve1 to pbs2, and pbs2 syncs the datastore to pbs1. are you running any vm/lxc on the pve2 DR node, or is it just an 'empty' DR node? have you done pbs1 to pve1...
hi
i want to add a second server that will be DR pve2 + pbs1 for the primary pve1. pbs1 will replicate to a remote pbs2 as well.
my question is: do i install pve first and pbs second, or vice versa? which order is better from an upgrading standpoint? would running pbs as an lxc/vm be better on the DR...
hi
i know we can secure the gui and ssh with 2fa, but is it possible to 2fa the console login?
also, with many companies requiring encryption for data at rest, how are you encrypting data?
easiest is to do it at the storage level, but that requires manual interaction with every reboot unless you...
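to give an idea of the storage-level route i mean, here's a rough sketch with zfs native encryption (the pool/dataset names are made up):

# create an encrypted dataset, prompts for a passphrase
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/backup
# after every reboot someone has to load the key by hand before the data is usable
zfs load-key rpool/backup
zfs mount rpool/backup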
i have a main PBS1 and a remote PBS2. on PBS2 i set up a pull sync job from PBS1, and i see it pulls more than what exists on PBS1. how is that possible?
Main PBS1 is dedicated with local ZFS; remote PBS2 is virtual with a TrueNAS NFS backend.
The VM backup for one VM on PBS1 has a count of 7 (keeping 7 backups...
sorry to hijack the thread, but on my pbs the zfs GUI created the pool with /dev/xxx device names. is there a way to convert it to by-id? i tried to export, but it says the pool cannot be unmounted.
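for context, the route i'm trying is roughly this ('tank' is a placeholder pool name; stopping the pbs services first is my guess at what keeps the datastore busy):

systemctl stop proxmox-backup-proxy proxmox-backup  # stop whatever holds the datastore open
zpool export tank
zpool import -d /dev/disk/by-id tank                # re-import using stable by-id device names
systemctl start proxmox-backup proxmox-backup-proxy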
i was thinking more in terms of expanding storage in the future without shutting down and handling everything on the os level, but this is also a good point!
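the in-place route i'm talking about would look roughly like this (sketch only, the vmid, disk name and sizes are made up):

qm resize 100 scsi0 +100G   # grow the virtual disk on the pve host, vm stays online
# then inside the guest, grow the partition and filesystem, e.g. on a linux guest:
growpart /dev/sda 1
resize2fs /dev/sda1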
to make it easier and safer: add new disk/s on the same or a different storage backend, add those to the VM, boot with a live iso (hbcd, ubcd etc) and clone everything to the new drives while resizing the partitions. then you can delete the old drives and boot from the new ones. longer process but safer imho.
edit: only...
thanks. i was wondering if anything was added to the native installer to make this possible. we try to keep our installs as vanilla as possible, but this is definitely a good workaround.
not to hijack the thread, but is the proxmox cluster network still used for migration on shared storage? if corosync is on a dedicated 1gb ring, will migration on shared storage have a huge performance impact? can the migration network be switched to the vm traffic network, or will that introduce performance...
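to be clear, by switching the migration network i mean something like this in /etc/pve/datacenter.cfg (sketch, the CIDR is just an example):

# send migration traffic over a dedicated network instead of the cluster ring
migration: secure,network=10.10.10.0/24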
we're on standard, which makes it quite a job to stay compliant, so we will definitely invest in datacenter once our workload goes up. thanks for the tip on the ms licensing, but it always makes my head spin lol.
we'd probably be fine with 3 nodes too, but the hardware is on the older...
once again, excellent piece of information. those are some beefy servers you got there. i'm trying to repurpose 5 older dual e5-26xx R430s with 256GB RAM, and hopefully they'll do the trick as our workload isn't too crazy (around 20 VMs, 99% win based) and we hope to have some room in case we double it...
thank you for taking the time to run those tests and provide a very detailed answer. it's great information.
i meant the specs of the node hardware, CPU/RAM. Sorry for not making that clear. Also, is your 10gb mesh used for both private and public networks, or are you splitting them? i'm thinking...