uhh... no.
It all depends on your dependence on HP's toolset (e.g., iLO/IPMI, IMC, InfoSight, etc.)
IF you use any of those, stick with HP. If you don't, or don't know what those are, sure, Supermicro can serve just as well.
You CAN do that, but it's really not a good idea when RBD exists.
When using the PVE API, that is the sequence that happens. @cph-cvm did you issue the snapshots from the Proxmox interface? If so, post the task log.
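For reference, this is roughly what issuing and then checking a snapshot through the API looks like (a sketch; NODE, VMID, and the snapshot name are placeholders):

pvesh create /nodes/NODE/qemu/VMID/snapshot --snapname pre-change
pvesh get /nodes/NODE/tasks --limit 10

The second call lists recent tasks, including the snapshot task and its exit status.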
Maybe, but we don't actually have evidence of that. The errors INSIDE the guest aren't (necessarily) indicative of that. Look for issues with the drive on the host:
dmesg -T | grep nvme
If you see issues, then yes, you need to investigate your motherboard settings (perhaps PCIe speed?) and...
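To check what the link actually negotiated (a sketch; run as root, and 01:00.0 is a placeholder for your NVMe device's PCI address from lspci):

lspci -vv -s 01:00.0 | grep -i lnk

Compare LnkSta (current speed/width) against LnkCap (what the device is capable of) to see whether the link trained down.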
This is a fairly common phenomenon. You're experiencing two issues, one as a consequence of the other.
If/when you saturate your NFS target, its latency will necessarily increase. As it increases, it can and does lock up the Proxmox metrics daemon (pvestatd), which causes the UI to turn into...
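You can confirm the daemon is what's wedged (just diagnosis, not a fix):

systemctl status pvestatd
journalctl -u pvestatd -b

If it's hung on the NFS mount, restarting it (systemctl restart pvestatd) usually brings the UI back, but the underlying latency problem remains.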
What does this mean? Post the relevant command and output so there is no confusion about context.
always.
Not necessarily. The larger your PGs are (i.e., the lower the PG count), the less even the distribution. Also, not all PGs get equal use.
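You can see this for yourself; per-OSD utilization and PG counts sit side by side in the output:

ceph osd df tree

Look at the %USE/VAR columns relative to the PGS column; wide variance paired with a low PG count is exactly this effect.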
"Affordable" is a fuzzy term. Depending on what you're...
... which can be accomplished with just Windows. That's my entire point. But you're the one designing and providing support; toolsets are at your disposal.
Yes :) you can do every item on your list with PVE.
You don't need BOTH of those. If you have a Windows guest on a VMware cluster, there really isn't a need to ALSO have Windows Failover Clustering. That said, you can do the same thing with PVE; I would simply point to the fact you can...
There are many homelabbers here. I'd be wary of going by the advice/experience of users with a different use case.
Parity RAID/erasure-coded (EC) storage pools are generally not well suited for virtualization workloads (wide stripe size/low queue count, while virtualization wants small random access). I would...
It doesn't.
The better question is: why do you want to use Proxmox for a Windows cluster? Just run the Windows cluster... what are you trying to accomplish with the added virtualization layer?
Oh ffs, not this again. I honestly don't know why this doesn't just die the death it deserves.
Oracle is NOT an issue, since the ZFS fork that OpenZFS is based on is CDDL-licensed. The "conflict" is using CDDL-derived code within a GPL-licensed kernel. Here is the ACTUAL license for OpenZFS; you can...
I didn't feel comfortable deploying BTRFS for PVE until very recently; and when I say "comfortable" I mean for lab deployment.
It's been working well enough within its limited scope of use; subvolumes work correctly, snapshots work correctly, and inline compression seems to work as well. I think its...
The loopback (lo) address is what you would use for both Ceph and corosync. There are no bonding or other options to configure.
A word of caution about this configuration: it is possible to create a denial-of-service condition under heavy load, since corosync and Ceph will be sharing available...
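For illustration, a minimal sketch of the loopback side of /etc/network/interfaces (the /32 is a placeholder; this assumes a routed/full-mesh setup where a routing protocol such as openfabric advertises it):

auto lo
iface lo inet loopback

iface lo inet static
        address 10.10.10.1/32

Ceph (public/cluster network) and corosync (ring0_addr) would then both be pointed at that address.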
Your H200 is only capable of RAID1, so switching to software RAID doesn't really cost you anything performance-wise. What you GAIN using ZFS instead is all the stuff @jdancer mentioned. JUST the compression and snapshots make it preferable to hardware RAID EVEN IF hardware RAID were faster, which it wouldn't be.
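A minimal sketch of the ZFS equivalent for a data pool (device paths are placeholders; prefer /dev/disk/by-id in practice, and the PVE installer handles this for you if it's the boot pool):

# mirror is the RAID1 equivalent; ashift=12 for 4K-sector disks
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zfs set compression=lz4 tank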
These things are not the same, but you're kinda SOL because this package conflicts with the pve-kernel package. Sorry mate :(
You can try to follow the instructions here: https://forum.proxmox.com/threads/getting-realtek-8125-drivers-working-with-proxmox.86991/
but they may or may not work...