The problem is that you have multiple NICs as bridge ports. If they're all connected to the same switch, you are creating a loop. If you want to use multiple NICs that way, you will need to create a bond on top of those 4 NICs and then use the...
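A minimal sketch of what that could look like in /etc/network/interfaces, following the Proxmox conventions — the interface names (eno1..eno4, vmbr0), the address, and the choice of 802.3ad are assumptions to adjust for your setup:

```
# Sketch only: NIC names, addresses and bond mode are assumptions.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad          # requires LACP configured on the switch
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0         # the bond is the bridge port, not the NICs
    bridge-stp off
    bridge-fd 0
```

The key point is that `bridge-ports` lists only `bond0`, so the bridge has a single uplink and no loop can form through the switch.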
Greetings everyone, I need your help with a problem that I can't quite figure out:
I run a private server with Proxmox Virtual Environment 9.4.1.
Among other things, it runs:
a virtual machine for...
Hello,
I have a problem which I hope I will be able to describe. It's quite complex in my opinion, or maybe I'm making it too complex for myself. Hard to say.
Also, I am not entirely sure whether this is a correct place to post it.
If not, take my post...
Well... if you still have some spare time, just try to power off the VM, migrate all disks to an NFS 4.2 datastore, and power it back on. Create a file, delete it, run 'fstrim -av', and check qemu-img info.
Space reclamation works every time without...
At https://packages.debian.org/forky/amd64/zfsutils-linux/filelist I can see there are /usr/bin/zarcstat and /usr/bin/zarcsummary
Maybe those were renamed in the newer version?
What about man zarcstat?
P.S.
Indeed...
Hi!
I'm looking for something similar for the new small setup:
- "3-node-storage" with a separate "3-node-hypervisor" (6 servers in total).
My searching found the following for storage-node replication/HA:
- MooseFS - https://moosefs.com
-...
@garfield2008 ,
Good, that confirms the cause.
What changed: presumably, during the reinstallation/update, the OVS bridges or bonds inherited the MTU of 9000 from the NICs (OVS negotiates the MTU automatically based on the...
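To avoid the MTU being auto-negotiated from the NICs after a reinstall or update, it can be pinned explicitly on both the bond and the bridge. A sketch in Proxmox's OVS style for /etc/network/interfaces — interface names and the 9000 value are assumptions; set whichever MTU you actually want end to end:

```
# Sketch only: names and MTU value are assumptions.
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_mtu 9000

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
    ovs_mtu 9000
```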
Hello Daniel,
thanks for your reply!
Yes, IPfire also has a lot of logs (most are firewall-related, but kernel and other logs are included too), and I checked them already, but I can't find any hint about the reason for the failure.
Currently...
Congratulations, great behavior for a technical discussion forum! :)
You simply didn't even read what I wrote in the entire thread, not even the very clear bolded parts.
If you are not able to deal with a technical dialogue by discussing topics...
Although they were about replicated pools (so no EC), the following reads might serve as a hint as to why (outside of experiments/lab setups) it's not a good idea to go against the recommendations...
I also have the same error here under kernel 6.17. But the strange thing is that it does not happen under ZFS. I also tested other file systems: Ext4, LVM, XFS, BTRFS. The controller crashes with all of them.
If you are not using ZFS, the only...
With size=min_size you cannot lose any OSDs without losing write access to the affected objects.
And it has nothing to do with number of nodes or number of OSDs.
This is not recommended and certainly not HA. With m=1 you cannot lose a single disk.
An erasure coded pool should have size=k+m and min_size=k+1 settings, which would be size=3 and min_size=3 in your case.
No no no. You got your math wrong.
To achieve the same availability as EC with k=6 and m=2 you need triple replication (three copies) meaning a storage efficiency of 33%. It is rarely necessary to go beyond 4 copies.
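The efficiency comparison above can be checked with a short sketch (the function names are my own, not Ceph API):

```python
def ec_efficiency(k: int, m: int) -> float:
    """Usable fraction of raw capacity for an erasure-coded pool."""
    return k / (k + m)

def replication_efficiency(copies: int) -> float:
    """Usable fraction of raw capacity for a replicated pool."""
    return 1 / copies

# EC with k=6, m=2 survives two lost chunks and stores data at 75%
# efficiency; matching that failure tolerance with replication needs
# three copies, i.e. 33% efficiency.
print(f"{ec_efficiency(6, 2):.0%}")        # 75%
print(f"{replication_efficiency(3):.0%}")  # 33%
```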
But I've got that with my three nodes, no?
k = 2
m = 1
size=2+1 = 3 (which is what I have)
min_size = k + 1 = 2 + 1 = 3 (and I do have three nodes).
So, I am struggling a little bit to understand how size = min_size = 3 in my...
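A short sketch of the arithmetic (the function name is my own) shows why this k=2, m=1 layout has no headroom — the pool stays writable only while at most size - min_size OSDs of a placement group are down:

```python
def ec_pool_settings(k: int, m: int) -> dict:
    """Recommended Ceph EC pool settings: size = k + m, min_size = k + 1."""
    size = k + m
    min_size = k + 1
    return {
        "size": size,
        "min_size": min_size,
        # OSDs you can lose while the pool stays writable (= m - 1):
        "writable_osd_losses": size - min_size,
    }

# The k=2, m=1 pool from this thread:
print(ec_pool_settings(2, 1))
# {'size': 3, 'min_size': 3, 'writable_osd_losses': 0}
# size == min_size, so losing any single OSD blocks writes,
# which is what the replies in this thread are warning about.
```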