Using the OLA (which I believe might be similar for Scale... I wish I had a client willing to pay for a Proxmox cluster with Scale servers) I use OpenVSwitch with an LACP bond to get the bridge set up.
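For reference, this is roughly the /etc/network/interfaces stanza I mean on the Proxmox side (a sketch only; the NIC names and bridge number are placeholders you'd adjust for your own hardware):

auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond0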
With that set up, I'm only connecting the OLA/OpenVSwitch bridges to the vRack. On the Internet/public side...
Simple: you need to first understand iptables, and then also know the whole datacentre firewalling stuff (which I ignore like a bad turd as I do that on a FortiGate-VM), as you need to also allow the correct INCOMING traffic so the return traffic can complete the connections - especially...
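To illustrate the return-traffic part (a sketch only; the management subnet and port are made up for the example):

# let replies to connections we initiated back in
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# then explicitly allow the NEW incoming traffic you actually want, e.g. SSH from a management range
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -m conntrack --ctstate NEW -j ACCEPT
# and only then a default deny
iptables -P INPUT DROP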
Well, for the deduplication to work you need at least one backup as close to the current one as possible. The way I "simulate" the above is to do frequent synchronisations, but then prune away all the old ones I don't want on the remote. Also PBS will NOT sync previous old backups (ie, your...
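For the pruning side, something along these lines (group, repository and keep numbers are just an example, and --dry-run first is your friend):

# on/against the remote PBS, trim a synced group down to a short window
proxmox-backup-client prune vm/10119 \
    --repository sync@pbs@remote-pbs:offsite-store \
    --keep-daily 7 --keep-weekly 4 --dry-run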
Using PBS in exactly this manner!
Actually 2 ways in my case:
1. the local backup PBS (keeping the historical long-term backups too)
2a. a DR DC (same provider) syncing from the local, with shorter-term pruning
2b. the other side of the world on a different provider, also syncing from the local, with even more pruning...
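For 2a/2b the pull is just a remote plus a sync job on each receiving PBS, roughly like this (hostnames, store names and the schedule are placeholders for the example):

# on the DR-site PBS: register the primary and pull from it on a schedule
proxmox-backup-manager remote add primary-pbs \
    --host 10.0.0.10 --auth-id sync@pbs --password 'xxxx' \
    --fingerprint '<primary cert fingerprint>'
proxmox-backup-manager sync-job create pull-from-primary \
    --store dr-store --remote primary-pbs --remote-store local-store \
    --schedule hourly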
I'll add `|egrep -i '[a-z]'` to remove the non-text lines ;)
But yes, it's an option; I was just wondering about a less indented output from qm/pct, given the "ascii art" I had to look at without the spaces in front.
I'm doing lots of snapshots (and yes, it would've shortened a recovery for me if there hadn't been a problem with pmxcfs causing the snapshots not to happen over this past weekend), and a listsnapshot output starts to degrade after a couple of lines and then, after some more, goes completely out of alignment...
Either I'm doing something less "as designed", or I'm the only one needing/asking for it, but is there a simple way to have multiple pools selected for backup in the same backup job? The problem I keep getting hit by is that if I have multiple pools on the same hypervisor/PVE trying to do...
FYI commandline option I'll be using for NOW:
# forget every snapshot in each listed backup group (grep Z keeps only the timestamped snapshot lines)
for i in vm/10119 ct/70011 ct/70012 vm/10301 vm/10300 vm/30406
do
    proxmox-backup-client snapshot list "$i" | grep Z |
        awk '{print "proxmox-backup-client forget " $2}' | sh -x
done
<Obligatory don't do this at home warnings>
I've moved/copied backups between datastores to separate their retention/synchronisation rates, and now I need to delete quite a number of backup groups in the various datastores, and well... the GUI doesn't work wonders for my fingertips (touch...
Busy getting some documentation/preparations in place, and the question popped up: what is needed to recover a PBS server with intact datastores, but where the rpool/root disk(s) failed or got corrupted? (Case in point: a server with a single NVMe, or two similar SSDs that fail together, or other operator...
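My current assumption (happy to be corrected) is that after a clean reinstall it's mostly a case of re-adding the entries in /etc/proxmox-backup/datastore.cfg to point at the surviving chunk stores, plus restoring users/ACLs and any encryption keys; something like:

datastore: store1
    path /mnt/datastore/store1
    comment re-attached after root disk rebuild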
Remember that PBS makes use of the ATIME to "touch" the used/referenced chunks during the GC cycle (it assumes relatime is set on the filesystem and that noatime is NOT set), and then goes and finds all the chunks with an ATIME older than a day + 5 minutes, which only get removed after...
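On ZFS that's a quick check/fix on the dataset backing the datastore (the dataset name here is just an example):

# atime must not be disabled for GC to see the "touched" chunks
zfs get atime,relatime rpool/datastore
zfs set atime=on rpool/datastore
zfs set relatime=on rpool/datastore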
I had a similar "challenge" on an old (circa 2013) SuperMicro with EFI (before UEFI, I believe...) where both the PVE 7.1 and the PBS 2.1 installation ISOs (downloaded as at Wednesday 12 Jan '22) somehow installed GRUB too, instead of sticking to EFI... I eventually did a debug installation...
Just note that a 4-way RAID1 (ie, 4 mirrored copies of the same data) is different from a 4-disk RAID10 (ie, a stripe of 2x 2-disk mirrors).
The 4-way RAID1 is capable of surviving 3 disk failures, while the 4-disk RAID10 can survive 2 disk failures, just not the two making up a specific vdev...
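In ZFS terms the difference looks like this (device names are placeholders):

# 4-way mirror: a single vdev with four copies of everything
zpool create tank mirror sda sdb sdc sdd
# 4-disk "RAID10": two 2-way mirror vdevs striped together
zpool create tank mirror sda sdb mirror sdc sdd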
You are using 6x units, so perhaps consider dRAID, though the "magical" better number is rather 7+, which then gives you the option of a hot spare to replace the failed unit immediately and do the physical replacement in the next available maintenance slot.
I'd then go for a 5-disk RAID-Z1 + 1 hot spare...
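Roughly like this (a sketch only; device names are placeholders, and do check the dRAID layout maths for your own disk count):

# 7 disks: dRAID1 with 1 distributed spare (5 data + 1 parity per redundancy group)
zpool create tank draid1:5d:7c:1s sda sdb sdc sdd sde sdf sdg
# 6 disks: 5-disk RAID-Z1 plus a classic hot spare
zpool create tank raidz1 sda sdb sdc sdd sde spare sdf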
In the context of SSDs, all writes are "fragmented" by design by the storage controllers on/inside the NVMe/SSDs.
In the context of ZFS, it's ZFS that was unable to find a contiguous block of size Y and had to split it into smaller (ashift-sized) chunks and spread them all over the storage...
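If you want to see what you're dealing with (pool name is a placeholder; note that FRAG in zpool list measures free-space fragmentation, not file fragmentation):

# the pool's ashift (0 means it was auto-detected at creation)
zpool get ashift tank
# pool-wide free-space fragmentation
zpool list -o name,size,allocated,free,fragmentation tank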