Here's what my drives report:
# nvme id-ns -H /dev/nvme0n1 | grep "Relative Performance"
LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
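For what it's worth, if the drive exposed a 4096-byte LBA format (the output above only lists the 512-byte one), the namespace could be reformatted to it. A hedged sketch only, assuming the 4K format were index 1, and keep in mind that nvme format wipes the namespace:
# nvme format /dev/nvme0n1 --lbaf=1 --force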
I used that man page to create a special rbd volume for small writes to see if it improves the...
If the problem only occurred with ceph storage, I would suspect that my ceph setup simply can't handle it. But the intel-ssd storage is not a ceph volume, and the problem happens there just as much as it does on ceph storage.
The poller writes many small files quite often. I'll forward some samples and a...
I found rbd migration prepare. However, running
# rbd migration prepare --object-size 4K --stripe-unit 64K --stripe-count 2 standard/vm-199-disk-0 standard/vm-199-disk-1
gives me an error:
2023-11-22T13:04:18.177+0200 7fd9fe1244c0 -1 librbd::image::CreateRequest: validate_striping: stripe unit is not a...
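As far as I understand it, the stripe unit has to be no larger than the object size and must divide it evenly, and 64K obviously doesn't fit into 4K objects. A hedged example of a combination that passes that check (whether it actually helps with small writes is a separate question):
# rbd migration prepare --object-size 64K --stripe-unit 4K --stripe-count 2 standard/vm-199-disk-0 standard/vm-199-disk-1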
More than that, can I create a ceph rbd pool that has a 4096 block size as well, for this type of virtual machine? I don't see any parameter in the pool creation process that would allow me to set that.
I do have this in my ceph.conf
[osd]
bluestore_min_alloc_size = 4096...
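One caveat I'm aware of (hedged, please correct me): bluestore_min_alloc_size is only applied when an OSD is created, so OSDs built before that line went into ceph.conf keep whatever value they were formatted with. To check what a running OSD actually uses (assuming osd.0 sits on this node):
# ceph daemon osd.0 config get bluestore_min_alloc_size
# ceph osd metadata 0 | grep -i alloc
The second command shows the value recorded in the OSD metadata on recent releases.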
We have done some more experiments with settings. If I increase the CPUs on this machine to 30, the problem of "D" state processes waiting for the disk practically goes away. However, while this may be a partial workaround, the real problem remains that the CPU usage is way too high.
The process...
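For reference, a quick way to snapshot the stuck tasks while it's happening (just a sketch, run on the PVE host):
# ps -eo pid,state,wchan:32,comm | awk '$2 == "D"'
# echo w > /proc/sysrq-trigger
The second one dumps the stack traces of all blocked tasks to dmesg, which shows what they are actually waiting on.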
Yes, this has been an ongoing problem. When we moved the storage to non-ceph lvm-thin storage, the problem seemed to go away. However, after a couple of weeks it started recurring in exactly the same way it did on ceph rbd storage.
There's no specific time. We have had a completed...
Reading the whole thread will be helpful. The nvme1 drive is part of a ceph pool, but the problem occurs regardless of which pool the VM uses; even if I move it to a local ext4 lvm-thin volume, the problem still occurs. Many other virtual machines use that pool and they don't have any issues.
OK, the user started his machine again just after 15:00.
syslog has been attached.
# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
pve-kernel-5.15: 7.4-4
pve-kernel-5.13: 7.1-9
pve-kernel-5.3: 6.1-6...
As to the size shown issue: can we have this flagged as an inconsistency to be fixed in an upcoming version? The aim should be either everything in GiB (preferred?) or everything in GB.
As to the host logs: I'll have to wait for it to happen again and will post it then
The VM config shows a 130GB allocation for the disk.
sata0: speedy:vm-199-disk-0,discard=on,size=130G,ssd=1
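For the size discussion above, assuming the 130G in that line is binary (GiB), the decimal figure works out like this (a sketch):
# echo $((130 * 1024**3))
139586437120
So the same disk is 130 GiB or roughly 139.6 GB depending on which unit is used, which is exactly the kind of mismatch mentioned above.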
The guest is FreeBSD. fsck has been run very often; that's not the issue.
The problem is that processes end up in a "D" state (waiting on disk). Eventually nothing runs on the...
We're experiencing a problem with a FreeBSD KVM guest that works 100% on installation, but after a while starts complaining that it can't write to the disk anymore. What we have done so far:
Moved the disk image off ceph to an lvm-thin volume
Changed the disk from Virtio-SCSI to SATA and also...
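For anyone wanting to reproduce the bus change, it can be done roughly like this from the CLI (a sketch; VMID 199 and the volume name are taken from the config posted earlier, and the existing volume is simply re-attached on the other controller):
# qm set 199 --delete scsi0
# qm set 199 --sata0 speedy:vm-199-disk-0,discard=on,ssd=1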
Scenario: CentOS guest OS with 8GB/24GB RAM allocated as min/max. The machine typically uses between 10GB and 12GB of the allowed RAM due to ballooning, but here's the problem: free -h shows only 14GB total available. I can't find anything else that shows the 24GB maximum allowed.
There are...
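A hedged way to see the full picture from the host side (VMID 199 is just an example here): qm config shows the configured min/max memory, and the QEMU monitor shows the balloon's current size, which should roughly match what free -h inside the guest reports as total.
# qm config 199 | grep -Ei 'memory|balloon'
# qm monitor 199
qm> info balloon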