@spirit: thanks for that info!
@wolfgang: in our case we have nodes with 4TB NVMe and additional nodes with just 1TB NVMe, so we wanted to create same-size OSDs to avoid having to adjust any weights for distribution. Is running multiple OSDs per NVMe a bad idea? As the links from user "WSL" show, there seems...
the best practice for getting multiple OSDs per NVMe seems to be "ceph-volume", which creates LVM volumes for BlueStore etc. Would you recommend that? Or is there any plan to support multiple OSDs per device from the PVE GUI? Thanks for any answers.
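For reference, a rough sketch of how multiple OSDs per device can be created with `ceph-volume` on the command line (the device path is a placeholder, and this assumes a Ceph release that ships the `lvm batch` subcommand):

```shell
# Dry-run first: report what ceph-volume would create, without touching the disk
ceph-volume lvm batch --report --osds-per-device 2 /dev/nvme0n1

# Sketch only: carve two LVM-backed BlueStore OSDs out of one NVMe device
# (/dev/nvme0n1 is a placeholder; substitute your actual device)
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
```

The `--report` pass is worth doing before the real run, since `lvm batch` is destructive once it starts preparing devices.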
Hi,
we use an nginx reverse proxy in front of Proxmox, and now the daemon log always shows 127.0.0.1 (localhost) as the client. I need the real IP there for fail2ban to work properly. I've looked in the code but couldn't manage to fix it on my own; it needs to look for the "X-Real-IP" header and use...
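On the nginx side, a minimal config fragment that passes the client address through to the backend (port 8006 is the standard pveproxy port; the upstream name is a placeholder) would look something like this, though pveproxy itself would still need to read these headers when logging:

```nginx
location / {
    proxy_pass https://127.0.0.1:8006;
    # Forward the real client address to the backend
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Alternatively, fail2ban can often be pointed at the nginx access log instead, which already contains the real client IP.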
Hi,
we want to build new shared storage for a 4-node PVE cluster, and we're unsure how to get the best performance. I think RAID-10 is faster than RAID-50, but might ZFS (RAID-Z2) be better still?
Currently we use a hardware RAID from Adaptec with 14x 3TB HDDs and 4x 250GB SSDs in maxCache, configured as RAID-50...
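If ZFS ends up being evaluated, here is a hedged sketch of the two layouts being compared (device names are placeholders; note that ZFS generally wants direct access to the disks, so the Adaptec controller would have to expose them as JBOD/pass-through rather than a hardware RAID volume):

```shell
# RAID-Z2: one wide parity vdev, survives any two disk failures,
# but random IOPS roughly that of a single disk per vdev
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Striped mirrors ("RAID-10" style): much better random IOPS,
# at the cost of 50% usable capacity
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
```

For VM workloads the striped-mirror layout is usually the faster choice; RAID-Z2 trades IOPS for capacity and redundancy.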
Hello,
we run a 2-node Proxmox setup with a couple of KVM guests. After moving some guests to node1 (Realtek NIC), we noticed a severe performance drop when booting from the network via TFTP image transfer. It takes about 20 minutes to fetch a 160MB image over TFTP. Without the issue it takes...
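One common troubleshooting step for Realtek NICs (an assumption worth testing, not a confirmed fix for this case) is to disable hardware offloading on the host interface, since buggy offload implementations in some Realtek chips are known to hurt small-packet traffic like TFTP:

```shell
# Interface name is a placeholder; check yours with `ip link`
# Turn off TCP segmentation, generic segmentation and generic receive offload
ethtool -K enp3s0 tso off gso off gro off

# Verify the current offload settings afterwards
ethtool -k enp3s0
```

If that helps, the settings can be made persistent via the interface configuration; otherwise comparing against the other node's NIC driver and firmware versions would be the next step.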