Search results

  1. New install pve 8.2 on Debian 12 certificate blocks GUI

    # cat /etc/hosts
    127.0.0.1 localhost
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
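
    Note that the quoted file has no line mapping the node's own hostname to a routable address, which Proxmox VE relies on when it generates and serves the node certificate. A minimal sketch of a fixed file, using the hostname pmx1 from the log below; the address and domain are placeholders:

      # /etc/hosts (192.0.2.10 and example.com are placeholders)
      127.0.0.1 localhost
      192.0.2.10 pmx1.example.com pmx1
      ::1 localhost ip6-localhost ip6-loopback
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters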
  2. New install pve 8.2 on Debian 12 certificate blocks GUI

    I have done a fresh install on a Debian 12 cloud host and all went well, I thought, except that port 8006 is not responding. (I followed the documentation here.) In the logs I find this: Jun 04 17:52:23 pmx1 pveproxy[12734]: /etc/pve/local/pve-ssl.pem: failed to use local certificate chain...
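
    Once the hostname resolves correctly, the node certificate can usually be regenerated in place rather than reinstalling. A minimal sketch, not taken from the thread:

      # regenerate the node's SSL certificate and restart the web proxy
      pvecm updatecerts --force
      systemctl restart pveproxy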
  3. Windows Server 2022 reports disk errors on ceph volume

    Sorry, it's now called retrim, which differs from defrag. However, it still kills the machine when doing it on a 1TB drive. It's NVMe storage, so it can't be too slow. It doesn't happen on any other machine, only on the new WS 2022 installation.
  4. Windows Server 2022 reports disk errors on ceph volume

    The driver update made no difference. However, the scsi-virtio driver allows thin provisioning of the disk volume, which is what we use, so Windows starts a defrag (Edit: it's actually an "optimization") once a week by default and it uses 75% of the available RAM. When some users also log on...
  5. Windows Server 2022 reports disk errors on ceph volume

    We installed a new Windows Server 2022 on a cluster that uses an SSD-based ceph volume. All seems to be going well, when suddenly the Windows event log reports: "An error was detected on device \Device\Harddisk0\DR0 during a paging operation" It's Windows error #51. There are other Windows...
  6. [SOLVED] Remote server doesn't deduplicate

    I found that the garbage collection job wasn't running. Fixed that and now all is well!
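
    For reference, a rough sketch of checking and running garbage collection from the PBS shell; the datastore name remote-store is made up:

      proxmox-backup-manager garbage-collection status remote-store
      proxmox-backup-manager garbage-collection start remote-store
      # optionally give the datastore a recurring schedule
      proxmox-backup-manager datastore update remote-store --gc-schedule daily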
  7. [SOLVED] Remote server doesn't deduplicate

    I have set up a remote server in a different city to which I ship all backups using a sync job. The remote PBS datastore, however, doesn't seem to be doing deduplication. The local PBS: Usage: 91.02% (3.97 TB of 4.37 TB); Backup Count: CT: 32 Groups, 380 Snapshots; Host: 0 Groups, 0...
  8. pvesh and how to list API endpoints

    I had used https://pve.proxmox.com/wiki/Proxmox_VE_API#Using_.27pvesh.27_to_Access_the_API, which basically just touches on how to use pvesh. However, I have now found https://pve.proxmox.com/pve-docs/api-viewer/, which actually documents the whole API. I think it would be a good idea to...
  9. pvesh and how to list API endpoints

    I have seen a couple of blogs out there that claim one can simply use the pvesh command without any parameters and it will drop into an interactive mode where one can show the calls that can be made at a particular level. It doesn't work like that for me, though, and the documentation is really...
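
    pvesh no longer seems to offer a parameterless interactive mode, but the API tree can still be walked with its ls and usage subcommands. A small sketch; the node name pmx1 is only an example:

      pvesh ls /                                 # list the top-level API directories
      pvesh ls /nodes/pmx1/qemu                  # list the VMs on one node
      pvesh usage /nodes/pmx1/qemu --verbose     # show the methods and parameters at that path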
  10. Use API to get storage location for VM's

    Here is what I ended up doing: #!/bin/bash if [ -f vms.config ]; then rm vms.config fi echo 'node,type,name,cores,memory,disk1,virtio1,scsi0,scsi1' >vms.config for i in $(pvesh get /nodes/ --noheader --output-format json|jq -r '.[] .node') do for j in $(/usr/bin/pvesh get /nodes/$i/qemu...
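
    Since the snippet above is cut off, here is a hedged sketch of the same idea: walk the nodes and VMs via pvesh, pull each config, and print the storage part of every disk key. The CSV layout and the regular expression for disk keys are my own assumptions:

      #!/bin/bash
      # Sketch only: list the storage backing every virtual disk of each VM in the cluster.
      # Assumes jq is installed and that this runs as root on a cluster node.
      echo 'node,vmid,disk,storage'
      for node in $(pvesh get /nodes --output-format json | jq -r '.[].node'); do
        for vmid in $(pvesh get "/nodes/$node/qemu" --output-format json | jq -r '.[].vmid'); do
          pvesh get "/nodes/$node/qemu/$vmid/config" --output-format json \
            | jq -r --arg n "$node" --arg v "$vmid" '
                to_entries[]
                | select(.key | test("^(ide|sata|scsi|virtio)[0-9]+$"))
                | select(.value | contains("media=cdrom") | not)      # skip CD-ROM drives
                | [$n, $v, .key, (.value | split(":")[0])] | @csv'
        done
      done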
  11. Use API to get storage location for VM's

    So, I have created a basic script to list the storage locations for virtual machines on my Proxmox clusters. I will expand this to include lxc as well, but for now I'm just trying to learn how jq achieves this. #!/bin/bash #IFS=$'\n' rm qemu.config touch qemu.config for i in $(pvesh get /nodes/...
  12. [SOLVED] Not able to retrieve disk' aio information via API

    How did you change the Token permission? I've been using pvesh to query the machines' configs, but I feel like I'm fumbling around in the dark.
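
    For what it's worth, token permissions are separate from the user's when privilege separation is enabled, and they can be granted on a path with pveum. A hedged example; the user, token and role names are invented:

      # give an API token read-only access to the whole tree
      pveum acl modify / --tokens 'auditor@pam!readonly' --roles PVEAuditor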
  13. Use API to get storage location for VM's

    Thanks, that gives me a good idea of how to do this. However, it seems that my pvesh is somehow deficient. When I do:
      ~# pvesh get config node/vm
      No 'get' handler defined for 'config'
    Also, I can't enter just
      ~# pvesh
      ERROR: no command specified
    although that should allow me to browse...
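
    The "No 'get' handler" message typically means the path does not resolve to a real API endpoint; the VM config lives under the node and VMID. A short sketch, reusing pmx1 and VMID 199 from elsewhere in these results as placeholders:

      pvesh get /nodes/pmx1/qemu/199/config
      pvesh get /nodes/pmx1/qemu/199/config --output-format json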
  14. Remote PBS log shows error, but all processes look completed

    Can anyone see what causes this error? 2023-12-18T13:00:07+02:00: percentage done: 98.18% (54/55 groups) 2023-12-18T13:00:07+02:00: sync group vm/199 2023-12-18T13:00:07+02:00: re-sync snapshot vm/199/2023-11-20T08:36:28Z 2023-12-18T13:00:07+02:00: no data changes 2023-12-18T13:00:07+02:00...
  15. Use API to get storage location for VM's

    Could you point me in the right direction regarding the config of each VM? There doesn't seem to be a way to query that via the API, is there? Or did you mean I should use the config file for that VM and get it with bash and grep or something like that?
  16. Use API to get storage location for VM's

    I need to extract which storage is assigned to each VM and LXC in our cluster. I can retrieve the total allocation for the boot disk, but can't see an obvious way to get the detail for each storage volume allocated. Some of our VMs have a boot disk on a ceph SSD pool and a logging disk on...
  17. Strange disk behaviour

    Here's what my drives report:
      # nvme id-ns -H /dev/nvme0n1 | grep "Relative Performance"
      LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
    I used that man page to create a special rbd volume for small writes to see if it improves the...
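
    If the drive advertised a native 4096-byte format it would appear as an additional LBA format in that output, and the namespace could be reformatted to it. This is only a sketch of the idea; the format index 1 is an assumption, and the format step erases the namespace:

      nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"    # list every supported LBA format
      # DESTRUCTIVE: switch the namespace to LBA format 1, assumed here to be the 4096-byte one
      nvme format /dev/nvme0n1 --lbaf=1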
  18. Strange disk behaviour

    If the problem only occurs with ceph storage, then I would suspect that my ceph may not be able to handle it. But the intel-ssd is not a ceph volume, and it happens there as much as it does on ceph storage. The poller writes many small files quite often. I'll forward some samples and a...
  19. Strange disk behaviour

    I found rbd migration prepare. However,
      # rbd migration prepare --object-size 4K --stripe-unit 64K --stripe-count 2 standard/vm-199-disk-0 standard/vm-199-disk-1
    gives me an error:
      2023-11-22T13:04:18.177+0200 7fd9fe1244c0 -1 librbd::image::CreateRequest: validate_striping: stripe unit is not a...
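
    The truncated message reads like the usual striping constraint: the stripe unit has to fit evenly into the object size, and 64K cannot fit into a 4K object. A hedged rework of the same command with the two sizes swapped, purely as an illustration:

      rbd migration prepare --object-size 64K --stripe-unit 4K --stripe-count 2 \
          standard/vm-199-disk-0 standard/vm-199-disk-1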
  20. Strange disk behaviour

    More than that, can I create a ceph rbd pool that has a 4096 block size as well, for this type of virtual machine? I don't see any parameter in the pool creation process that would allow me to set that. I do have this in my ceph.conf:
      [osd]
      bluestore_min_alloc_size = 4096...
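
    As far as I know, bluestore_min_alloc_size is an OSD-level value that only takes effect when an OSD is created, and there is no pool-creation flag for a 4096-byte block size; the closest per-image knob is the RBD object size, set when the image is created. A hedged example, with an invented image name:

      # create an image with 4 KiB objects in the existing pool; 4096 bytes is the smallest object size rbd accepts
      rbd create --size 32G --object-size 4K standard/vm-test-disk-0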