I have a small Proxmox cluster composed of 4 x Dell R640s, each with 1TB of RAM and 4 x 4TB Dell NVMe P4510 drives configured as 2 x mirrored vdevs, for a total of almost 8TB on each node in the cluster. This local storage is called `ssdstorage` on each node.
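For reference, the layout above (two 2-way mirrors per node) can be sketched with a hypothetical `zpool create`; the device paths below are placeholders, not the actual ones:

```shell
# Hedged sketch: build a pool named "ssdstorage" from two 2-way mirror vdevs.
# The /dev/disk/by-id/ paths are hypothetical placeholders.
zpool create ssdstorage \
  mirror /dev/disk/by-id/nvme-drive0 /dev/disk/by-id/nvme-drive1 \
  mirror /dev/disk/by-id/nvme-drive2 /dev/disk/by-id/nvme-drive3

# Verify the topology: should show two mirror vdevs of two drives each.
zpool status ssdstorage
```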
I have set up HA and replication...
Dunuin, wow, thank you for that great explanation. It is very well thought out. I had not even considered the PVE-to-PBS threat vector, but it makes perfect sense that it could happen should the PVE host become compromised.
So the best way would be to create a user on PBS that has only...
Thanks, Dunuin; by ransomware protection, are you referring to my primary PBS getting compromised? I have on-system backups and snaps now, and I have my primary PBS doing routine backups all day (not a lot of VMs/CTs, 100 or fewer) for my case, keeping 5 last, 48 hourly, 13 daily, 24 weekly, 24...
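The retention schedule described above (5 last, 48 hourly, 13 daily, 24 weekly, ...) maps onto PBS's `--keep-*` options. A hedged sketch using `proxmox-backup-client` follows; the repository and group name are hypothetical placeholders, and `--dry-run` keeps it non-destructive:

```shell
# Hedged sketch: apply the retention policy above to one backup group.
# "ct/105" and the repository string are hypothetical placeholders.
proxmox-backup-client prune ct/105 \
  --repository backup@pbs@pbs.example.com:datastore1 \
  --keep-last 5 --keep-hourly 48 --keep-daily 13 --keep-weekly 24 \
  --dry-run   # show what would be pruned without deleting anything
```

Note that prune only marks snapshots for removal; the space is reclaimed later by garbage collection.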
As the title says, I have two PBS servers, and one (the backup unit) pulls a sync from the primary. My primary already does pruning and garbage collection; do I also need to do that on the backup server that pulls the sync, or will it automatically prune based on the sync?
I assume...
These are all brand-new Dell EMC 4TB NVMe Enterprise P4510 drives installed in new servers. The nodes have two vdevs each, with each vdev being 2 x 4TB drives mirrored. The PBS has 8 of those same drives, but in a RAIDZ2 configuration.
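As a quick sanity check, the two layouts work out to quite different usable capacities: each mirror vdev contributes one drive's worth of space, while an 8-drive RAIDZ2 loses two drives to parity. A minimal arithmetic sketch (ignoring ZFS metadata overhead, so real figures land slightly lower):

```shell
DRIVE_TB=4

# Cluster nodes: 2 mirror vdevs, each contributing one drive of capacity.
MIRROR_VDEVS=2
echo "node usable: $((MIRROR_VDEVS * DRIVE_TB)) TB"           # ~8 TB, as described above

# PBS: 8-drive RAIDZ2 keeps (8 - 2) drives' worth of data capacity.
RAIDZ2_DRIVES=8
echo "pbs usable: $(( (RAIDZ2_DRIVES - 2) * DRIVE_TB )) TB"   # ~24 TB raw
```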
No GC or verification running at the time of these tests. There...
We have three Dell R640 servers: all NVMe-backed ZFS storage, 1TB RAM each, a 100G main interface, and a 10G dedicated Corosync interface, all connected to Cisco Nexus 9k switches. Our PBS is also a Dell R640 with 100% NVMe-backed storage (RAIDZ2), 256GB RAM, and a 100G main interface only.
Running iperf3...
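For context, a typical iperf3 check between two nodes looks like the following (hostnames are placeholders); a single TCP stream usually cannot saturate a 100G link on its own, so parallel streams are worth testing:

```shell
# On the receiving node (hypothetical hostname pve2):
iperf3 -s

# On the sending node: 8 parallel streams for 30 seconds.
iperf3 -c pve2.example.com -P 8 -t 30
```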
I am on 8.1.4, and this is still not fixed. I have notifications on my backup job set to Default (Auto) and 'On Failure Only', and I am still getting notified on every single successful backup.
vzdump 100 105 106 107 108 110 112 113 117 118 120 142 143 144 --storage ProxBackup01 --notes-template...
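As a possible workaround sketch while the notification behaviour is sorted out, vzdump has a legacy `--mailnotification` flag that restricts e-mail to failures only. Whether it overrides the newer notification system on 8.1.x is an assumption to verify; the VMIDs and storage name are taken from the job above:

```shell
# Hedged sketch: legacy vzdump flag that mails only on failure.
# Interaction with the newer notification settings is an assumption to test.
vzdump 100 105 106 --storage ProxBackup01 --mailnotification failure
```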
Thanks to everyone for their feedback. For years I have run a 6-node cluster on two 10G connections, one trunked for all VLANs and the other dedicated to Corosync, and have never had an issue.
In reading the replies, I think I will stick with a dedicated 10G Corosync connection!
Thanks...
So, as the title says, I am deploying all new Proxmox servers to replace our aging fleet of 2U Dells. Currently, I have a 10G trunk for all of my normal VLANs and a separate 10G connection specific to only Corosync VLAN traffic. My new servers have 4 x 10G NICs and 2 x 100G NICs each.
I was...
All my systems use only NVMe drives (Intel DC4500 series) in a mirrored ZFS configuration. I also have a PBS, so I am not backing up locally. Storage of ISOs, etc., is done via NFS. The servers don't break a sweat; I just want to downsize from 7- or 8-year-old servers to something newer and...
This is great information; thank you both. I would really like to move to 1U servers to save space in my computer room, so I think the 630/630 is inexpensive enough to give a try. I just don't know how powerful they will be compared to my 4 x CPU R820s. Do I need two of them to replace a single R820?
I was never able to figure this one out; recreating a CT and trying everything from scratch yielded the exact same results, regardless of which node I used to create the CT. The only thing I can think of is that I was running Docker on it (which I do in a lot of CTs), but this one had a LOT of...
So we have a 6-node Proxmox cluster and several stand-alone Proxmox servers (all 8.0.4), with PBS at 3.0-3. PBS is available on all nodes in our cluster and on our stand-alone servers.
Well, today, something weird happened. I ran a snapshot backup of a CT on one of our standalone nodes to our PBS...
OK, so I restarted the LXC container in question, and it "crashed" the node about 10 minutes later. Again, I have access to the node via SSH and the VMs on the node are running, but the LXC container is non-responsive; I can't console, SSH, or otherwise access it. Here is the output of the commands you...
OK, I have not checked any of this yet; I am assuming that I should do so while the CT is in a failed state. I will restart it and try this when it fails.
Thank You