I see, thank you.
So for my case, having four bays, it would be prudent to have two contain disks in a pool together (allowing for redundancy in case of failure), and the other two set up with pools to export to. Awesome, thank you!
I was a bit confused about why three disks in RAIDZ-1 cannot rebuild after two losses, so I did some more research. Please correct me if I'm still wrong, but from what I understand now, "RAIDZ-1" is not the same as a "ZFS mirror." My drives are set up in a mirror, as it shows when setting...
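For anyone else confused by the distinction, here is a rough sketch of how the two layouts are created (the pool name "tank" and the device names are placeholders, not from the original posts):

```shell
# Two-way mirror: every block is written to both disks, so the pool
# survives the loss of either one.
zpool create tank mirror /dev/sda /dev/sdb

# RAIDZ-1: data plus one parity block striped across three disks, so the
# pool survives the loss of any single disk, but not two at once.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# "zpool status" shows which layout a pool actually uses:
# a "mirror-0" vdev versus a "raidz1-0" vdev.
zpool status tank
```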
My server has four hot-swap bays, and I have six drives for them. The idea is for one to stay as a hot spare, one to stay as an online backup, and two to be exchanged weekly as onsite and offsite cold-storage offline backups. I'm trying to set it up to be as automated as possible, ideally being...
Judging by the lack of responses, I'm guessing this is an eye-roller and I'm probably fine. So I have a question: how can I prevent the hot spare from coming online when I manually remove a drive? I only want it to engage when there is an actual failure, not when I'm swapping out a drive for cold storage.
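One approach (a sketch with hypothetical pool/device names, not a confirmed answer from the thread): offline the disk administratively before pulling it. A disk taken offline with `zpool offline` is not treated as failed, so it should not trigger the hot spare the way a real fault does.

```shell
# Take the disk offline before physically removing it; an
# administratively offlined device is not counted as a failure,
# so the hot spare stays idle.
zpool offline tank /dev/sdc

# ...swap the drive for the cold-storage rotation, then bring the
# returned disk back into the pool:
zpool online tank /dev/sdc
```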
Hello Proxmox,
I have a server running PBS on two NVMe drives in RAIDZ-1, plus four enterprise HDDs in hot-swap bays for storing the backups. I want HDDs 1-3 in RAIDZ-1 with the ability to hot-swap for cold storage, and HDD 4 as a hot spare in case of a failure. To set this up, I have taken the following...
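The layout described above can be sketched in one command (pool name and device names are assumptions for illustration):

```shell
# Three-disk RAIDZ-1 for the backup datastore, with the fourth bay
# attached as a hot spare.
zpool create backups raidz1 /dev/sda /dev/sdb /dev/sdc spare /dev/sdd

# The spare appears under a separate "spares" section with state AVAIL
# until the pool needs it.
zpool status backups
```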
Hello Proxmoxians,
I'm trying to create a new VM on one of our nodes but I'm getting an error when clicking "Finish."
Parameter verification failed. (400)
memory: value must have a minimum value of 16
Searching the forums and the web didn't give me anything useful, so I've resorted to...
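For context, the error means the memory value sent to the API was below the 16 MiB minimum (e.g. the field was left blank or zeroed out in the wizard). As a workaround, the VM can be created from the CLI with an explicit memory value; the VM ID, storage, and bridge names below are hypothetical:

```shell
# --memory is in MiB and must be at least 16; 2048 MiB here.
qm create 100 --name testvm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32
```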
Hello all,
I have four nodes backing up to PBS on a regular basis. One of the nodes has five VMs, only one of which is backing up; the other four fail. All other VMs on all other nodes back up without issue. I really have no idea what could be causing it, but it's important that I fix this...
Hello,
I am trying to set up our PVE node on the network and am having an issue I've been thus far unsuccessful in resolving.
The node has two NICs with multiple ports (1-4 and 5-8), which I've set up for LACP, resulting in two bonds (bond0 and bond1). The goal is that bond1 (ports 5-8 / the...
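For reference, a single LACP bond plus bridge in `/etc/network/interfaces` typically looks something like this (interface names and addresses are assumptions, not taken from the post):

```
auto bond1
iface bond1 inet manual
    bond-slaves eno5 eno6 eno7 eno8
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet static
    address 192.168.10.2/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
```

The switch side must have a matching LACP trunk/LAG configured on those ports, or the bond will not come up cleanly.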
Well, that makes a lot of sense. Our network unfortunately can't support those requirements, so we will have to do without Ceph and HA. Thank you very much!
Hello all,
[Background] We recently set up our first Proxmox cluster with four new HP ProLiant DL360 Gen9 servers and an HP 2530-48G (J9775A) switch. The servers' NICs are configured to form LACP bonds to the switch, and we have set up Ceph and HA in Proxmox.
[Issue 1] We were getting random...