At the time of my original post I was using the fresh 6.1 update. I have since moved my gaming PC over to another distro.
For reference, I had been happily using PCI passthrough for months with no issues on Proxmox, right up until the 6.1 update.
The rest of my cluster still runs Proxmox, and they have...
Preface: I don't really know where to look for error logs with Proxmox other than the general /var/log.
With this latest update, one of my nodes completely froze and locked up while using KVM with PCI passthrough. The Nvidia card was in the middle of encoding a video with NVENC when this happened...
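So far all I've thought to check is the following (standard Debian/systemd locations, nothing Proxmox-specific; if there's a better spot I'd love to know):

    journalctl -k -b -1      # kernel messages from the boot where the node froze
    less /var/log/syslog     # general system log, including KVM/QEMU noise
    ls /var/log/pve/tasks/   # Proxmox task logs, as far as I understand them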
I can't move any of the encrypted OSDs between nodes. Is this because of the encryption, or am I doing something wrong?
Ceph will not pick up the drives in the other nodes.
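For reference, this is roughly what I'm running on the destination node after moving a drive, based on my reading of the ceph-volume docs (my assumption is that the dmcrypt keys come from the mons, so activation should just work):

    ceph-volume lvm list            # check whether the moved OSD's LVM metadata is visible
    ceph-volume lvm activate --all  # try to activate it, unlocking dmcrypt with the stored key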
Okay? I'm responding to what you said. You said server RAID controllers don't support HBA mode, and that's simply not true.
I understand English is not your native language, so to be clear: what you said implied that no server RAID controller allows HBA passthrough. What you meant was that your server RAID...
I set compression_mode to force and almost instantly noticed the compressed numbers increase under ceph df detail.
I guess aggressive isn't enough with Ceph. I wish their documentation were a bit more complete regarding compression.
The problem is that I've since removed most of the primary_data and rewritten it, and the compressed total fell to 400 MB.
I'm not convinced it's working correctly. I'll keep fiddling with the settings and see where that goes.
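For anyone who finds this later, these are the exact commands I'm fiddling with (primary_data is my pool; snappy is just the algorithm I happen to be testing, not necessarily the right choice):

    ceph osd pool set primary_data compression_mode force
    ceph osd pool set primary_data compression_algorithm snappy
    ceph df detail   # watch the USED COMPR / UNDER COMPR columns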
Yes, they do. The onboard LSI RAID controller on my Supermicro motherboards supports HBA mode, and my physical LSI RAID cards support HBA mode too. You may need to flash the firmware, but the RAID card manufacturer usually has it available on their website.
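If you do need to crossflash, the procedure is roughly this (sas2flash is Broadcom's tool; 2118it.bin is just the IT-mode firmware for my SAS2008-era cards, yours will differ):

    sas2flash -listall                         # confirm the controller shows up and note its index
    sas2flash -o -f 2118it.bin -b mptsas2.rom  # flash the IT firmware and boot ROM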
Connect the four drives to your...
Can you update the kernel to 5.4 for the updated Ceph drivers?
If not, will Proxmox crap itself if I add my own compiled kernel, provided the right options are set?
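For context, the way I'd build and install it is the standard Debian route, nothing Proxmox-aware (starting from the running kernel's config so I don't miss anything obvious; which extra options Proxmox actually needs is exactly what I'm asking):

    cp /boot/config-$(uname -r) .config   # start from the current kernel's config
    make olddefconfig                     # accept defaults for any new options
    make -j$(nproc) bindeb-pkg            # build Debian packages
    dpkg -i ../linux-image-*.deb          # install the new kernel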
Thank you.
Ceph does not appear to be compressing my files.
root@node01:/mnt/primary# ceph df detail
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    hdd       25 TiB     22 TiB     3.0 TiB     3.0 TiB      11.82
    TOTAL     25 TiB     22 TiB     3.0 TiB     ...
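And this is how I'm checking the pool-level settings, in case I've simply misconfigured it (primary_data is my pool name):

    ceph osd pool get primary_data compression_mode
    ceph osd pool get primary_data compression_algorithm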
I worded my OP poorly. What I meant to say was that ifupdown2 was fully functional and worked fine a week ago. It's only on new installs that this is happening (I haven't upgraded the cluster, though, so maybe it's bugged on upgrades as well).
I have verified this issue on multiple nodes with fresh installs. All nodes were installed on Debian Buster for the LUKS encryption, following the guide exactly.
This was not happening two weeks ago on older installs (same hardware), so I'm at a loss here.
Images of the issue. Ignore the dirty...
I'm not home right now, but I've used Proxmox before without this issue, even when installing on top of Buster.
I've verified this issue on multiple nodes, and ifupdown2 is installed.
Steps to reproduce (rough commands below):
Install Buster. Install Proxmox on top. Install opensm/rdma-core and follow the how-to for...
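Roughly the commands behind those steps, from my notes (repo and key URLs as I remember them from the official Buster how-to, so double-check them):

    echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
    wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi
    apt install opensm rdma-core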