If you're using EC (6,2), that's about 75% storage efficiency. If you use a replicated rule across the same eight nodes (one copy per node, so size=8), and assuming each node contributes 1 OSD, then you get only about 12.5% storage efficiency.
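For reference, the rough math behind those numbers: the usable fraction is k/(k+m) for an EC profile and 1/size for a replicated pool.

EC k=6, m=2:                            6 / (6 + 2) = 0.75  -> ~75% usable
Replicated, size=8 (one copy per node): 1 / 8       = 0.125 -> ~12.5% usable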
12.5% < 75% no?
Just wanted to say thanks. I installed the latest 6.19.1 on my Proxmox box running AMD Strix Halo (Framework Desktop), and that kernel actually fixed the problem in my Immich LXC container running ONNX models. Before, I was running 6.17 from Proxmox...
"lower" and "higher" are subjective. Ceph achieves HA using raw capacity.
Suit yourself, but this is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions, without any Ceph at all...
But doesn't replication yield lower storage efficiency?
I am currently using EC (2,1) for my "simple" three-node Proxmox HA cluster, which serves my DNS, Windows AD DC, and AdGuardHome, and where the LXC/VM disks reside on the distributed EC Ceph...
OK, OK... talking to myself again, it seems. Well, should anybody be searching for this in the future, maybe they'll find this post helpful (hopefully).
Here's a quick rundown of my testing...
FIRST, and I know this is a slight pain, but check your downloaded...
If you're talking about the VXLAN tunnels themselves or the BGP peers, they simply use the routing table to reach the remote peer IPs.
So you can add simple routes on your host if needed (see the sketch below).
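Purely as an illustration (the peer IP, gateway, and interface name here are made up; adjust them to your underlay):

# pin the route to a remote peer/VTEP IP via a specific gateway and NIC
ip route add 192.0.2.10/32 via 10.0.0.1 dev ens19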
Or do you want PBR specifically for the VXLAN UDP port...
AMD microcode is installed.
I guess I can find a keyboard/monitor to go poke at the BIOS, but if those things are set wrong, why doesn't 6.14.8-2 have a problem with it?
Found something else while googling: someone having similar issues in...
The stuff about setting a slightly higher voltage and/or a lower frequency in the BIOS seems relevant, though. It might also pay to look at which C-states are enabled and whether you have the AMD microcode installed.
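If you want to check the microcode part, a quick sketch (Debian/Proxmox; amd64-microcode is the stock Debian package):

# see whether a microcode update was applied at boot
dmesg | grep -i microcode

# install AMD microcode updates (needs the non-free-firmware component enabled)
apt install amd64-microcode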
Other than that I got nuthin'.
I didn't find that much, but I did discover that the message is cut off. It should be "software wrote 0xE to reset control register 0xCF9".
When you google that, yes, it starts to get more interesting, but most of what I'm finding so far is about...
Hi,
This is a weird one. I have a Minisforum MS-A2, Ryzen 9955HX, 128GB of RAM, and a Samsung SSD. Running kernels up to 6.14.8-2, it is rock solid, so I don't think it's a hardware issue...
Newer kernels, certainly including all the 6.17s I've...
I just came here to post about this too.
My home lab Minisforum MS-A2 is rock solid under 6.14.8-2-pve; anything newer that I've tried gives spontaneous reboots. A power issue caused the machine to boot back up into 6.17.9-1-pve, and... boom, less than...
I gave you one possible answer, but you chose to ignore it.
As a funny coincidence, you answered one of my questions by posting this picture: your volblocksize is 16k.
So my theory was right.
So again, every 1TB VM disk will not only use 1TB...
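For illustration only (this assumes a 3-disk RAIDZ1 with ashift=12, which may not match your pool): a 16k volblock is four 4k sectors, and RAIDZ1 adds one parity sector per row of two data sectors, so each 16k write allocates 4 data + 2 parity = 6 sectors = 24k, i.e. roughly 1.5x the logical size.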
I'd recommend setting up a scheduled trim instead.
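Assuming a Linux guest with systemd, and the disk's Discard option enabled in Proxmox, a minimal sketch:

# inside the guest: enable the stock weekly trim timer
systemctl enable --now fstrim.timer

# or run a one-off trim of all mounted filesystems, verbose
fstrim -av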
Please share:

zfs list -ospace,refreservation -rS used    # space accounting, recursive, sorted descending by used
qm config VMIDHEREOFAVMWITHLOTSOFUSEDDATA   # config of a VM with lots of used data (put its VMID here)
Also read this about how to properly use fstrim/discard. Pretty sure a zpool trim does not affect...
You would have to provide a lot more information than what you posted here. Otherwise we have to make educated guesses ;)
So just cluster, no HA?
So you use ZFS and VMs use local RAW drives on ZFS?
There are a few problems with that.
Short:
A...
Why would anyone want to publish their Proxmox admin web console over the internet?
Surely the wiser approach would be to route that traffic via some sort of VPN, unless you are planning on providing access to the public.
RAIDZ is the common short form of RAIDZ1, i.e. single parity. The WebGUI does not offer striping with just two disks, because if one of the disks fails, all data would be lost. If you insist on using RAID-0 for data, it can be configured...
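If you really insist, a sketch from the CLI (pool name and device paths are placeholders; this gives zero redundancy):

# striped (RAID-0) pool across two disks: one disk failure loses everything
zpool create tank /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2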