TL;DR: This is an excellent walkthrough of the requirements for corosync tuning in larger clusters by @fweber (which is in fact part of the thread linked by @bbgeek17).
A status update on this:
Two corosync parameters that are especially relevant for larger clusters are the "token timeout" and the "consensus timeout". When a node goes offline, corosync (or rather the totem protocol it implements) will need to...
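For reference, both timeouts live in the totem section of /etc/pve/corosync.conf. A minimal sketch with illustrative values (the numbers are assumptions for a large cluster, not recommendations from the post); per corosync.conf(5), the effective token timeout is token plus token_coefficient (650 ms by default) for every node above two, and consensus defaults to 1.2 * token:

totem {
    version: 2
    # base token timeout in ms; corosync adds token_coefficient
    # per node above two to get the effective runtime value
    token: 10000
    # must be at least 1.2 * token; defaults to that if omitted
    consensus: 12000
}

When editing /etc/pve/corosync.conf, remember to increment config_version so the change propagates cluster-wide.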
Hi Stoiko,
thank you very much. I'm sure we will use the Basic subscription on the 2nd node too once the Community subscription runs out. For now it was important for me that, technically, there is no problem with mixing Community and Basic in one...
Hi @alexx,
there was a similar discussion recently here: https://forum.proxmox.com/threads/proxmox-with-48-nodes.174684
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I'm not sure about that.
But I read the docs[0] again, and there are some additions that are needed in PVE 9.
I will check it out.
SOLVED after changing the line in the interfaces file:
Before:
vmbr0
...
...
post-up /usr/bin/systemctl restart...
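Since the fixed line is truncated above, here is only the general shape of a post-up hook in an ifupdown2 /etc/network/interfaces stanza (addresses, bridge ports, and the restarted unit below are assumptions for illustration, not the poster's actual fix):

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # runs after the bridge comes up; the absolute binary path avoids
    # PATH differences in the ifupdown2 hook environment
    post-up /usr/bin/systemctl restart frr.service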
LVM is a PVE-integrated way to use FC as shared storage. You can read this article to get a high-level understanding of the components involved:
https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
Although it references iSCSI as...
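As a rough illustration (the storage, VG, and device names below are assumptions, not from the post), a volume group on top of a FC multipath LUN becomes PVE shared storage like this:

# once, on a single node: create the VG on the multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_fc /dev/mapper/mpatha

# register it cluster-wide; shared=1 tells PVE every node sees the same LUN
pvesm add lvm fc-lvm --vgname vg_fc --shared 1 --content images

which results in an /etc/pve/storage.cfg entry like:

lvm: fc-lvm
    vgname vg_fc
    shared 1
    content images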
Hello everyone,
I have a question about the SDN stack in Proxmox. Currently, traffic in the EVPN/VXLAN networks breaks out via the host interface that has the default route. Is there an officially supported way to change or define which...
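If this is about EVPN exit nodes: the zone's exit-nodes setting pins which hosts carry the north-south breakout. A minimal sketch of the corresponding /etc/pve/sdn/zones.cfg entry (zone, controller, and node names are assumptions):

evpn: evpnzone
    controller evpnctrl
    vrf-vxlan 10000
    # only these nodes announce the default route out of the zone
    exitnodes pve1,pve2
    # prefer pve1 for the breakout; pve2 is the fallback
    exitnodes-primary pve1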
Hi... Do I need to do something with fabric when upgrading from Proxmox 8 to Proxmox 9?
Everything was fine, but after the upgrade to PVE 9, the post-up systemctl restart frr.service doesn't work inside /etc/network/interfaces.
@fstrankowski I'm fully aware of the risks of an OSD being full and know how to deal with that, but in no case should an OSD break because of that ;)
Fragmentation definitely has an impact on this, and I will watch it more closely from now on...
Best of luck to you *fingers crossed*. I had to rebuild the whole cluster in my client's case and fix Ceph by manually restoring placement groups, which was a pain.
This will only fix your problem in the short term. Fragmentation will come back relatively quickly. You'd better add more OSDs or wipe some data off your pools :-)
Initially I'd like to raise concerns about the amount of available storage already being in use. By default Ceph doesn't allow more than 80% usage, so you'd have to take precautions really soon with these concerns in mind.
I'd highly...
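To see how close the cluster actually is to those thresholds, the standard Ceph commands are enough (nothing here is specific to this post):

# raw and per-pool utilization
ceph df

# per-OSD utilization and variance; compare %USE against the ratios
ceph osd df

# show the currently configured nearfull/backfillfull/full ratios
ceph osd dump | grep ratio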
Your published DKIM record (pmg._domainkey.dti.uncoma.edu.ar) is valid. What is your actual issue when you do the lookup? The DNS record itself should work.
dig txt pmg._domainkey.dti.uncoma.edu.ar +short
" v=DKIM1; h=sha256; k=rsa...
Fleecing is just a cache so that you don't have to wait for the PBS.
Backups always run such that if the VM wants to write during a snapshot backup, the block first has to be read and written to the PBS before the...
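For reference, fleecing is enabled per backup job. A hedged CLI example (the VMID and storage names are assumptions):

# write copy-before-write blocks to a fast local fleecing image instead
# of blocking the guest write on the PBS round-trip
vzdump 100 --storage pbs-store --fleecing enabled=1,storage=local-lvm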