Search results

  1.

    [SOLVED] Not able to retrieve disk's aio information via API

    How did you change the token permission? I've been using pvesh to query the machines' configs, but I feel like I'm fumbling around in the dark.
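
    For comparison, the same kind of query can be sent straight to the REST API with an API token instead of pvesh (pvesh on the node runs as root and bypasses token permissions). Hostname, node, VMID and the token below are placeholders; as far as I understand the permission model, the token needs at least VM.Audit on the VM's path:
      # read a VM's config over the API with a token (all values are placeholders)
      curl -k -H 'Authorization: PVEAPIToken=monitor@pam!readonly=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
          https://pve.example.com:8006/api2/json/nodes/NODE/qemu/VMID/config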
  2.

    Use API to get storage location for VMs

    Thanks, that gives me a good idea of how to do this. However, it seems that my pvesh is somehow deficient. When I do
      ~# pvesh get config node/vm
      No 'get' handler defined for 'config'
    Also, I can't enter just
      ~# pvesh
      ERROR: no command specified
    although that should allow me to browse...
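
    In case it helps: the "No 'get' handler" message usually just means the path is incomplete; pvesh expects the full API path, and, as far as I know, newer versions dropped the old interactive shell, so a bare pvesh simply errors out. A couple of examples with a placeholder node name and VMID:
      # browse the API tree
      pvesh ls /nodes
      pvesh ls /nodes/NODE/qemu
      # fetch one VM's config
      pvesh get /nodes/NODE/qemu/VMID/config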
  3.

    Remote PBS log shows error, but all processes look completed

    Can anyone see what causes this error?
      2023-12-18T13:00:07+02:00: percentage done: 98.18% (54/55 groups)
      2023-12-18T13:00:07+02:00: sync group vm/199
      2023-12-18T13:00:07+02:00: re-sync snapshot vm/199/2023-11-20T08:36:28Z
      2023-12-18T13:00:07+02:00: no data changes
      2023-12-18T13:00:07+02:00...
  4.

    Use API to get storage location for VMs

    Could you point me in the right direction with the config of each VM? There doesn't seem to be a way to query that via the API, is there? Or did you mean I should use the config file for that VM and get it with bash and grep or something like that?
  5.

    Use API to get storage location for VMs

    I need to extract which storage is assigned to each VM and LXC in our cluster. I can retrieve the total allocation for the boot disk, but can't see an obvious way to get the detail for each storage volume allocated. Some of our VMs have a boot disk on a ceph SSD pool and a logging disk on...
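
    A rough sketch of how this could be scripted on one of the nodes, assuming jq is installed; the key pattern is only a guess at the usual disk keys (scsiN/sataN/virtioN/ideN for VMs, rootfs/mpN for containers):
      # list every disk/mountpoint line for every VM and LXC in the cluster
      pvesh get /cluster/resources --type vm --output-format json \
        | jq -r '.[] | "\(.node) \(.type) \(.vmid)"' \
        | while read -r node type vmid; do
            echo "== $type $vmid on $node =="
            pvesh get "/nodes/$node/$type/$vmid/config" --output-format json \
              | jq -r 'to_entries[]
                       | select(.key | test("^(scsi|sata|virtio|ide|mp)[0-9]+$|^rootfs$"))
                       | "\(.key): \(.value)"'
          done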
  6.

    Strange disk behaviour

    Here's what my drives report:
      # nvme id-ns -H /dev/nvme0n1 | grep "Relative Performance"
      LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
    I used that man page to create a special rbd volume for small writes to see if it improves the...
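
    If the question is whether the drive also offers a 4K LBA format, the same id-ns output lists every format the namespace supports; actually switching formats with nvme format wipes the namespace, so that is only noted here as a comment:
      # list all LBA formats the namespace supports (the active one is marked "in use")
      nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
      # switching would be e.g. "nvme format /dev/nvme0n1 --lbaf=1" (destroys all data on the namespace)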
  7.

    Strange disk behaviour

    If the problem only occurred with ceph storage, then I would suspect that my ceph may not be able to handle it. But the intel-ssd is not a ceph volume, and it happens there as much as it does on ceph storage. The poller writes many small files quite often. I'll forward some sample and a...
  8.

    Strange disk behaviour

    I found rbd migration prepare. However,
      # rbd migration prepare --object-size 4K --stripe-unit 64K --stripe-count 2 standard/vm-199-disk-0 standard/vm-199-disk-1
    gives me an error:
      2023-11-22T13:04:18.177+0200 7fd9fe1244c0 -1 librbd::image::CreateRequest: validate_striping: stripe unit is not a...
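
    For what it's worth, the truncated validate_striping message looks like the usual complaint that the stripe unit has to evenly divide the object size, and 64K does not fit into a 4K object. Something along these lines should at least pass validation (whether it helps the workload is another question):
      # stripe unit must divide the object size; here two 32K stripes per 64K object
      rbd migration prepare --object-size 64K --stripe-unit 32K --stripe-count 2 \
          standard/vm-199-disk-0 standard/vm-199-disk-1
      rbd migration execute standard/vm-199-disk-1
      rbd migration commit standard/vm-199-disk-1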
  9.

    Strange disk behaviour

    More than that, can I create a ceph rbd pool that has a 4096 block size as well, for this type of virtual machine? I don't see any parameter in the pool creation process that would allow me to set that. I do have this in my ceph.conf:
      [osd]
      bluestore_min_alloc_size = 4096...
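
    As far as I understand it, bluestore_min_alloc_size is baked into an OSD when it is created, so that [osd] entry only affects OSDs (re)created afterwards, and there is no per-pool block size. On the RBD side the closest per-image knob is the object size, which goes down to 4K; the image name and size here are only an example:
      # create an RBD image with the minimum 4K object size in the existing pool
      rbd create standard/vm-199-disk-2 --size 130G --object-size 4K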
  10.

    Strange disk behaviour

    We have done some more experiments with settings. If I increase the CPUs on this machine to 30, the problem of "D" state processes waiting for the disk practically goes away. However, while this may be a partial workaround, the problem is still that the CPU usage is way too high. The process...
  11.

    Disk errors on FreeBSD 12.2 guest

    This problem has been recurring repeatedly... new thread here
  12.

    Strange disk behaviour

    Yes, this has been an ongoing problem. When we moved the storage to non-ceph lvm-thin storage, the problem seemed to go away. However, after a couple of weeks it started recurring in exactly the same way it did on ceph rbd storage. There's no specific time. We have had a completed...
  13.

    Strange disk behaviour

    Reading the whole thread will be helpful. The nvme1 is part of a ceph pool, but the problem occurs regardless of which pool the VM uses; even if I move it to a local ext4 lvm-thin volume, the problem still occurs. Many other virtual machines are using that pool and they don't have any issues.
  14.

    Strange disk behaviour

    I changed the thread name to better describe the issue.
  15.

    Strange disk behaviour

    Ok, the user has started his machine again just after 15:00. syslog has been attached.
      # pveversion -v
      proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
      pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
      pve-kernel-5.15: 7.4-4
      pve-kernel-5.13: 7.1-9
      pve-kernel-5.3: 6.1-6...
  16.

    Strange disk behaviour

    As to the size-shown issue: can we have this flagged as an inconsistency to be fixed in an upcoming version? Either the aim should be everything in GiB (preferred?) or else everything in GB. As to the host logs: I'll have to wait for it to happen again and will post it then.
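
    For reference, the two units differ by about 7% at this scale: the 130G disk in this thread is 130 GiB = 139,586,437,120 bytes, roughly 139.6 GB, which coreutils can confirm:
      # interpret 130G as GiB (IEC) and print the raw byte count
      numfmt --from=iec 130G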
  17.

    Strange disk behaviour

    The VM config shows a 130GB allocation for the disk:
      sata0: speedy:vm-199-disk-0,discard=on,size=130G,ssd=1
    The guest is FreeBSD; fsck has been run very often, so that's not the issue. The problem is that processes are running into a "D" state (waiting on disk). Eventually nothing runs on the...
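
    On the host, the remaining disk lines for that VM can be listed the same way (VMID 199 taken from the volume name above; the key list only covers the common ones):
      # show every disk line in VM 199's config
      qm config 199 | grep -E '^(sata|scsi|virtio|ide|efidisk|tpmstate)[0-9]+:'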
  18.

    Strange disk behaviour

    We're experiencing a problem with a FreeBSD KVM guest that works 100% on installation, but after a while starts complaining that it can't write to the disk anymore. What we have done so far:
      - Moved the disk image off ceph to an lvm-thin volume
      - Changed the disk from Virtio-SCSI to SATA and also...
  19.

    [SOLVED] Ballooning memory: How to retrieve the max RAM allowed from the guest OS?

    Found it, thanks to someone on Reddit: dmidecode --type memory | grep "Maximum Capacity"
  20.

    [SOLVED] Ballooning memory: How to retrieve the max RAM allowed from the guest OS?

    Scenario: CentOS guest OS with 8GB/24GB RAM as min/max allocation. The machine typically uses between 10GB and 12GB of the allowed RAM due to ballooning, but here's the problem: free -h shows only 14GB total available. Can't find anything else that shows the 24GB max allowed. There are...
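
    From the host side, the configured maximum and the current balloon target can be cross-checked against what the guest reports; a small sketch assuming VMID 199 and that these field names haven't changed between versions:
      # memory/balloon figures for the VM as the host sees them
      qm status 199 --verbose | grep -Ei 'balloon|maxmem|^mem'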