I have a problem with very long NVMe device names in ZFS pools causing systemd errors. The error is:
Jun 05 11:44:09 maverick systemd[22172]: zd0p3: Failed to generate unit name from device path: File name too long
A fix was committed to systemd v250, but Proxmox is using v247. Current release of...
First, drive capacity is marketed using powers of 10, but operating systems measure storage using powers of 2, so a 1 TB drive will never format to 1 TB of usable space. As you are aware, RAIDZ1 provides the usable capacity of N-1 drives, and on top of that there is ZFS overhead. Hence (4-1) * 0.93 = ~2.79. Proxmox...
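To put rough numbers on the powers-of-10 vs powers-of-2 point (a back-of-the-envelope sketch, assuming 1 TB drives):

1 TB (marketed)  = 10^12 bytes
1 TiB (reported) = 2^40 bytes ≈ 1.0995 * 10^12 bytes
10^12 / 2^40     ≈ 0.909, so a "1 TB" drive shows up as roughly 0.91 TiB

RAIDZ1 parity then drops one drive's worth of that, and ZFS metadata and padding shave off a few more percent, which is where the figure above comes from.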
Good question. If you enable the persistent cache at the ceph.conf level, it applies to all disks by default. I have since disabled it on the TPM and EFI disks, so it is now enabled only on the VM data disks.
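Roughly what that looks like (a sketch, not a drop-in config: the cache path, size, pool and image names are placeholders, and the option names are the standard PWL settings as I recall them, so verify against your Ceph release):

[client]
rbd_plugins = pwl_cache
rbd_persistent_cache_mode = ssd                  # write-back cache on an SSD/NVMe device
rbd_persistent_cache_path = /mnt/optane/rbd-pwl  # placeholder path on the Optane device
rbd_persistent_cache_size = 1G

# then opt individual images back out, e.g. an EFI disk (placeholder names):
rbd config image set rbd_vm/vm-100-disk-1 rbd_persistent_cache_mode disabled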
You don't pass through to containers; you pass through to a VM. For LXC containers, you need the drivers on the host and in the guest LXC container, and the container is then given access to the resource via cgroups v2.
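As a hypothetical example for handing an Intel iGPU render node to a container (the container ID and device major/minor numbers are placeholders, so check yours with ls -l /dev/dri):

# /etc/pve/lxc/101.conf (placeholder container ID)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir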
Live migration works fine. I haven't had an unsafe shutdown to test, but the write-back cache is safe, and there are commands to flush or invalidate the cache in the event of such a crash.
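From memory they look like this (pool/image names are placeholders; check rbd help persistent-cache on your version):

rbd persistent-cache flush rbd_vm/vm-100-disk-0        # write dirty cache entries back to the cluster
rbd persistent-cache invalidate rbd_vm/vm-100-disk-0   # discard the cache if it is known to be stale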
Ceph is healthy. Ceph RBD is replica 3 across 3 nodes with 7 Samsung SM863 OSDs per node (21 total). WAL/DB is on Optane 900P. The RBD persistent write-back cache is also on Optane. This only happens on virtio-blk; it does not happen on virtio-scsi. I am using librbd because krbd does not support...
I receive the following error when starting a VM.
task started by HA resource agent
kvm: rbd request failed: cmd 0 offset 0 bytes 540672 flags 0 task.ret -2 (No such file or directory)
kvm: can't read block backend: No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
I have a problem that just started with the updates pushed to the no-subscription repo overnight. QEMU won't start.
task started by HA resource agent
terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer'
what(): End of buffer
TASK ERROR: start failed: QEMU...
Through extensive testing with Optane cache drives, I have been able to increase single-queue, iodepth=1, 4K writes by over 4x using the RBD Persistent Write Log (PWL) cache. However, krbd does not support PWL, only librbd does. In addition, librbd allows tuning such as "rbd_read_from_replica_policy = localize"...
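For reference, that librbd-only tuning looks roughly like this in the [client] section of ceph.conf (the hostname is a placeholder, and localize only helps if crush_location actually matches your CRUSH hierarchy):

[client]
rbd_read_from_replica_policy = localize   # serve reads from the nearest replica instead of always the primary
crush_location = host=pve1                # placeholder; tells the client where it sits in the CRUSH map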
Without knowing whether this node is also a Ceph MDS or Manager, and how many OSDs it has, it is impossible to say how much memory Ceph should be consuming. However, Ceph, like all software-defined storage, takes heavy advantage of memory caching.
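If you do want to bound it, the usual knobs are the per-daemon memory targets; purely as an example (the values here are illustrative, not a recommendation):

ceph config set osd osd_memory_target 4294967296       # ~4 GiB per OSD daemon
ceph config set mds mds_cache_memory_limit 4294967296  # only relevant if the node also runs an MDS
ceph config get osd.0 osd_memory_target                # check what an individual OSD is actually set to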