Do you use all the internal PCIe slots on your R620? You could add a small mirrored M.2 device there to install PVE. 12th gen doesn't have bifurcation, but there is some limited NVMe support.
An 8-bay as opposed to a 10-bay R620 also implies there is an optical bay? You could just tape up a small SATA...
Passthrough or not, a PERC card will bottleneck the individual drives.
8 NVMe drives at x4 each means you need 32 PCIe lanes to hook all these up to your system at their full width... but the PERC card only has x8.
To use the drives properly as JBOD or for ZFS, you would need to give the PERC...
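If you want to see that bottleneck for yourself, the negotiated link width is easy to check from the host; the PCI address below is just a placeholder for wherever your PERC shows up:

# Find the controller's PCI address first.
lspci | grep -i raid
# Compare LnkCap (what the card supports) with LnkSta (what it actually negotiated).
# 03:00.0 is a placeholder address -- substitute your own.
lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'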
It's just weird to get newcomers into a community who, rather than trying to learn our way of doing things, want to force an unnatural marriage between two otherwise incompatible systems... yes, we get that you have invested tons of cash in hardware and then had the rules changed on you.
Still, it...
The "no parity raid for VM hosting" is a very old rule, I have been breaking it for over 10 years. If you have HDDs and a cacheless controller, don't break the rule.
If you have high performance SSDs and a 8 GB DDR4 write-back cache on your controller, there is no rule. The drives won't perform...
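If you want to sanity-check whether your particular controller/SSD combination really hides the parity penalty before putting VMs on it, a quick fio run against the array is the usual test; the filename and sizes here are just placeholders:

# Random 4k writes with direct I/O so the page cache doesn't flatter the numbers.
# /mnt/vmstore/fio.test is a placeholder path on the RAID volume.
fio --name=parity-check --filename=/mnt/vmstore/fio.test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --direct=1 --size=4G --runtime=60 --time_based --group_reporting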
You will be absolutely fine staying on VMware for the remainder of this cycle, and plan better for the next. You might not get support and software updates, but they are not going to turn off your stuff.
Proxmox is already well known in the enterprise space. When you talk to a Dell sales rep and wind up going with whatever host/hypervisor/SAN combo they recommend, that is more like SMB with a members-only jacket thrown in as a gift. Not enterprise.
Cinder MAY eventually come from a paid...
Everyone knows what's going on with VMware, and everyone is looking for Proxmox to jump through hoops to make itself a frictionless drop-in replacement for VMware, and that's just not how it's going to work in the beginning. I do hope they get a ton more interest and investment, but man...
Tiny Ceph clusters are going to be slow to begin with, and if you are power-constrained and trying to operate permanently on such a cluster, I would reconsider Ceph entirely. Replicated ZFS may be better for a two-server solution.
Too many entry-level Ceph users are drawn to the idea of running...
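For the two-server replicated-ZFS route mentioned above, PVE's built-in storage replication (which rides on ZFS send/receive between nodes) is roughly this; the VMID, job ID, node name and schedule are just placeholders:

# Replicate VM 100's disks to the other node every 15 minutes.
# "100-0" is the job ID (vmid-jobnumber) and "pve2" is a placeholder node name.
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# Check when the last sync ran and whether it succeeded.
pvesr status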
A public RDP host can receive over 40,000 password guesses per day.
Even with a complex login, you should not be hosting ANY administrative services directly on the internet without 2FA, an IP ACL, or a tool like RDPGuard, fail2ban, etc.
Your guest VMs should also never have direct access to...
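For the fail2ban option above, even a minimal jail cuts those guesses off quickly; this sketch covers SSH (fail2ban has no stock RDP filter, which is where RDPGuard comes in on Windows), and the retry/ban values are just examples:

# Append a minimal SSH jail; tune maxretry/bantime to taste.
cat <<'EOF' >> /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 1h
EOF
systemctl restart fail2ban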
If you don't host VMs on it, then you don't need as much CPU for just Ceph. But I think most people agree clusters are better when the nodes are all identical.
RDMA does not exist in Ceph; it is Ethernet only. There was an effort for it that was abandoned 6-7 years ago:
https://github.com/Mellanox/ceph/tree/luminous-12.1.0-rdma
After apt update; apt install pve-kernel-6.2 and rebooting, I have two test nodes (of different architectures) running this kernel:
Linux 6.2.16-4-bpo11-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-4~bpo11+1
With the virtual CPU pinned to "IvyBridge", they are able to migrate VMs successfully back and...
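In case it saves someone a search, the CPU pinning is just the VM's CPU type; VMID 100 is a placeholder here:

# Present an IvyBridge CPU model to the guest so migration between
# Ivy Bridge-EP and Skylake-SP hosts doesn't expose mismatched flags.
qm set 100 --cpu IvyBridge
# Verify the setting took.
qm config 100 | grep '^cpu:'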
We are introducing newer servers with Xeon Gold 6148 CPUs into our environment alongside our E5-2697 v2 hosts. So now we have a mix of Ivy Bridge-EP and Skylake-SP.
I expected the requirement was to set the VM's CPU type to "IvyBridge" to fix the CPU flags, instruction set extensions, etc. to...
Thank you. The 17.2.6 dashboard definitely had some unfortunate stylesheet changes. I haven't been able to test OSD failures yet. Are you using NVMe?
I'm going to attach a screenshot of the configuration database from the PVE GUI. There are some parameters I do not remember setting...
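If it's easier than a screenshot, the same data can be pulled from the CLI on any node; this should list everything currently stored in the cluster's configuration database, which should correspond to what that GUI panel is showing:

# Dump every option set in the Ceph configuration database (the monitors' config store).
ceph config dump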