Search results

  1.

    Use API to get storage location for VM's

    Could you point me in the right direction with the config of each VM? There doesn't seem to be a way to query that via the API, is there? Or did you mean I should use the config file for that VM and get it with bash and grep or something like that?
  2.

    Use API to get storage location for VM's

    I need to extract which storage is assigned to each VM and LXC in our cluster. I can retrieve the total allocation for the boot disk, but can't see an obvious way to get the detail for each storage volume allocated. Some of our VMs have a boot disk on a Ceph SSD pool and a logging disk on... (see the API sketch after this list)
  3.

    Strange disk behaviour

    Here's what my drives report: # nvme id-ns -H /dev/nvme0n1 | grep "Relative Performance" LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use) I used that man page to create a special rbd volume for small writes to see if it improves the...
  4.

    Strange disk behaviour

    If the problem only occurs with Ceph storage, then I would suspect that my Ceph may not be able to handle it. But the intel-ssd is not a Ceph volume and it happens there as much as it does on Ceph storage. The poller writes many small files quite often. I'll forward some sample and a...
  5.

    Strange disk behaviour

    I found rbd migration prepare. However, # rbd migration prepare --object-size 4K --stripe-unit 64K --stripe-count 2 standard/vm-199-disk-0 standard/vm-199-disk-1 gives me an error: 2023-11-22T13:04:18.177+0200 7fd9fe1244c0 -1 librbd::image::CreateRequest: validate_striping: stripe unit is not a... (see the striping sketch after this list)
  6.

    Strange disk behaviour

    More than that, can I create a Ceph RBD pool that has a 4096 block size as well, for this type of virtual machine? I don't see any parameter in the pool creation process that would allow me to set that. I do have this in my ceph.conf [osd] bluestore_min_alloc_size = 4096...
  7.

    Strange disk behaviour

    We have done some more experiments with settings. If I increase the CPUs on this machine to 30, the problem of "D" state processes waiting for the disk practically goes away. However, while this may be a partial workaround, the problem is still that the CPU usage is way too high. The process...
  8.

    Disk errors on FreeBSD 12.2 guest

    This problem has been re-occurring... new thread here
  9.

    Strange disk behaviour

    Yes, this has been an ongoing problem. When we moved the storage to non-Ceph lvm-thin storage, the problem seemed to go away. However, after a couple of weeks it started re-occurring in exactly the same way that it was on Ceph RBD storage. There's no specific time. We have had a completed...
  10.

    Strange disk behaviour

    Reading the whole thread will be helpful. The nvme1 is part of a Ceph pool, but the problem occurs regardless of which pool the VM uses; even if I move it to a local ext4 lvm-thin volume, the problem still occurs. Many other virtual machines are using that pool and they don't have any issues.
  11.

    Strange disk behaviour

    I changed the thread name to better describe the issue.
  12.

    Strange disk behaviour

    OK, the user started his machine again just after 15:00. syslog has been attached. # pveversion -v proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve) pve-manager: 7.4-16 (running version: 7.4-16/0f39f621) pve-kernel-5.15: 7.4-4 pve-kernel-5.13: 7.1-9 pve-kernel-5.3: 6.1-6...
  13.

    Strange disk behaviour

    As to the size shown issue: Can we have this flagged as an inconsistency to be fixed in an upcoming version? Either the aim should be everything in GiB (preferred?) or else everything in GB. As to the host logs: I'll have to wait for it to happen again and will post it then
  14.

    Strange disk behaviour

    The VM config shows a 130GB allocation for the disk: sata0: speedy:vm-199-disk-0,discard=on,size=130G,ssd=1 The guest is FreeBSD; fsck has been run very often, that's not the issue. The problem is that the processes are running into a "D" state (waiting on disk). Eventually nothing runs on the...
  15.

    Strange disk behaviour

    We're experiencing a problem with a FreeBSD KVM guest that works 100% on installation, but after a while starts complaining that it can't write to the disk anymore. What we have done so far: moved the disk image off Ceph to an lvm-thin volume; changed the disk from Virtio-SCSI to SATA and also...
  16.

    [SOLVED] Ballooning memory: How to retrieve the max ram allowed from the guest OS?

    Found it, thanks to someone on Reddit: dmidecode --type memory | grep "Maximum Capacity" (see the parsing sketch after this list)
  17.

    [SOLVED] Ballooning memory: How to retrieve the max ram allowed from the guest OS?

    Scenario: CentOS guest OS with 8GB/24GB RAM as min/max allocated. The machine typically uses between 10GB and 12GB of the allowed RAM due to ballooning, but here's the problem: free -h shows only 14GB in total available. Can't find anything else that shows the 24GB max allowed. There are...
  18.

    proxmox-backup-proxy rrd EINVAL error

    I'm getting the error below after something happened (it was not happening before), and I'm not sure that I changed anything deliberately. It prevents the status graphs (rrd, right?) from being displayed in the PBS administration section. Oct 11 22:07:17 pbs3 systemd[1]: Starting...
  19.

    Can one set PBS priority lower to prevent guest slowdowns?

    Thanks for this, @Chris! I have implemented a local PBS and will be running sync jobs to pull the backups off-site. I'll see how it goes in the next few days.
  20.

    Can one set PBS priority lower to prevent guest slowdowns?

    I have run into an issue a couple of times where guest OSes slow down dramatically if the PBS server doesn't perform for whatever reason. Previously I had a network issue which prevented backups from being written at a reasonable speed, and it caused the guest machines being backed up to... (see the vzdump.conf sketch after this list)
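
For search result 2: a minimal shell sketch of one way to list the per-disk storage assignments for every VM and container via the API, assuming pvesh and jq are available on a cluster node; the disk-key pattern below is only an illustration and may need adjusting for your configs.

    # walk the cluster resource list, then read each guest's config
    pvesh get /cluster/resources --type vm --output-format json \
      | jq -r '.[] | "\(.node) \(.type) \(.vmid)"' \
      | while read -r node type vmid; do
          echo "== $type $vmid on $node =="
          # disk keys look like scsi0/sata0/virtio0/ide0 (QEMU) or rootfs/mp0 (LXC)
          pvesh get "/nodes/$node/$type/$vmid/config" --output-format json \
            | jq -r 'to_entries[]
                     | select(.key | test("^(scsi|sata|virtio|ide|mp)[0-9]+$|^rootfs$"))
                     | "\(.key): \(.value)"'
        done

Each printed volume line shows the storage name before the colon (e.g. speedy:vm-199-disk-0), which is the per-volume detail the post asks about.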
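
For search result 5: librbd generally rejects striping when the stripe unit does not evenly divide the object size, which is what the truncated validate_striping error appears to point at. A hedged sketch of the same migration with sizes that satisfy that constraint (the concrete values are illustrative, not a recommendation):

    # assumption: keep --stripe-unit at or below --object-size, and make the
    # object size a multiple of the stripe unit
    rbd migration prepare --object-size 64K --stripe-unit 4K --stripe-count 2 \
        standard/vm-199-disk-0 standard/vm-199-disk-1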
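
For search results 16 and 17: inside the guest, free only reports the currently ballooned amount, while the configured maximum is exposed through the DMI tables. A small sketch that extracts just the number from the command found in result 16 (the field split is an assumption about dmidecode's output layout; run as root):

    # prints e.g. "24 GB" on a guest whose balloon maximum is 24GB
    dmidecode --type memory | awk -F': ' '/Maximum Capacity/ {print $2; exit}'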
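
For search result 20: the snippets above don't show a PBS-side priority setting, but vzdump on the PVE side has throttling options that can soften the impact of a slow backup target on running guests. A hedged /etc/vzdump.conf sketch (the values are placeholders):

    # /etc/vzdump.conf
    # cap backup read bandwidth (KiB/s) so a struggling PBS doesn't stall guests
    bwlimit: 51200
    # lower the I/O priority of the local compressor/archiver (0-8, BFQ scheduler)
    ionice: 7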