Search results

  1. VM backup recover from chunks.

    Hi. This afternoon I removed backups of a VM, and it now turns out these were possibly the only means I had of recovering the deleted VM. I'm wondering if there is a controlled way to use garbage collection to isolate chunks which are no longer attached to any backups. That would at least give me...
  2. ceph warning post upgrade to v8

    Thanks for your time/work here Max. Do these fixes only apply to Proxmox-packaged Ceph, or should they have made it down from upstream Ceph as well? I've just upgraded to 18.2.4-1~bpo12+1 from download.ceph.com/debian-reef and am seeing the same dashboard problems covered in this thread...
  3. Metric Server doesn't send all LXC and VMs

    So I looked at the VMs I'm not seeing diskio read stats for and couldn't see a connection, so I tried shutting down/restarting a few and, lo and behold, the stats appeared again. They must have got into some state which prevented only those stats from being collected. Strange, but I now know...
  4. Metric Server doesn't send all LXC and VMs

    Seeing something similar here. Diskio read stats have gone missing for about 1/3 of my 330 or so VMs, but strangely disk write stats are still present for all of them. I can probe the proxmox influx db and see there are host entries for the missing diskio read hosts, but with zero values on every measurement.
  5. Ceph Octopus / Debian bullseye

    Thanks Fabian, that looks good to me. Just some minor packaging tweaks is all I can see.
  6. Ceph Octopus / Debian bullseye

    Hi, I have a Ceph cluster which I need to upgrade to Pacific shortly. There are 400+ PVE VMs running on top of this storage via an RBD connection, though the setup is not hyper-converged with Proxmox. I'd want to upgrade the Debian OS on the Ceph machines to bullseye before running the...
  7. Rate Limiting

    Thanks Mira, we are indeed using OVS so I suspect this is the issue.
  8. Rate Limiting

    Thanks. I've set the limit in the VM config against the 'net0' device:

    balloon: 512
    boot: cdn
    bootdisk: virtio0
    cores: 2
    ide2: none,media=cdrom
    memory: 2048
    name: XXXXXX
    net0: virtio=E6:2A:96:74:9E:75,bridge=vmbr0,rate=4,tag=730
    onboot: 1
    ostype: l26
    smbios1...
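As an aside on the config above: the 'rate' option on a Proxmox net device is specified in MB/s, and the same setting can be applied from the CLI with qm. A minimal sketch, assuming a hypothetical VM ID of 100; the MAC address, bridge, and VLAN tag are just the placeholder values from the post:

```shell
# Apply a 4 MB/s rate limit on the net0 device of VM 100
# (VM ID and device values here are illustrative placeholders).
qm set 100 --net0 "virtio=E6:2A:96:74:9E:75,bridge=vmbr0,rate=4,tag=730"

# Confirm the resulting config entry:
qm config 100 | grep '^net0'
```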
  9. Rate Limiting

    Sorry, I should have been clearer with those screenshots. The first is for the VM set to 4 MB/s and the second is for the backup server traffic. I take your point on the second, but the first doesn't look correct to me. Is it possible to trigger logging on the HVs to identify if/when per-VM limits are...
  10. Rate Limiting

    Hello, I've recently wanted to apply rate limiting to a particular VM and have set it to 4 MB/s; however, I can see from Prometheus graphs that this VM still hits upwards of 25 MB/s for at least 3 or 4 minutes each morning. Likewise, we have two Proxmox Backup servers, and on these I've...
  11. Single-file restore and LVM

    Thanks. PVE 7 upgrade it is then.
  12. Single-file restore and LVM

    Hi. Should this be working now with:

    Package: proxmox-backup-file-restore
    Architecture: amd64
    Version: 1.1.12-1

    I'm still seeing this. We have a single VG in the VM I'm working with, only letters in the name. "proxmox-file-restore failed: Error: mounting 'drive-virtio0.img.fidx/part/["1"]'...
  13. VM crashes with live migration

    So the HV which always seems to be involved in these VM crashes has an "EPYC 7452 32-Core" CPU, whereas the others all have Xeons (E5-2620) or similar. The EPYC machine was memtested for over 100 hours this weekend and no errors were found. This post is potentially highlighting the problem though...
  14. VM crashes with live migration

    Thanks Thomas. These are all Supermicro machines with Xeon E5-2620 CPUs and 128 or 256 GB of RAM. We have no reason to suspect that all 8 of these machines have suddenly developed an issue. It seems that the problem with live migration exists across them all. My suspicion is that...
  15. VM crashes with live migration

    Today we're experiencing an issue with live migration between HVs wherein the VMs (Debian Linux, Buster or Stretch) crash several minutes after the move. The two HVs primarily affected (of 8 in the cluster; not all tested so far) are running all the versions below and have both been rebooted today...
  16. [SOLVED] Bulk change 'root' user email address

    Ah thanks, that's what I was looking for!
  17. [SOLVED] Bulk change 'root' user email address

    I have many Proxmox HVs running and need to change the 'root@pam' user email address on all of them. I've googled a fair bit to no avail; does anyone know of a pve* command I can run (via ansible) to do this? Thanks in advance.
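The accepted answer isn't shown in this snippet, but one way to do this is with pveum, the PVE user management CLI. A minimal sketch; the email address and the ansible inventory group name 'pve' are placeholders:

```shell
# Change the root@pam email address on the local node
# (the address is a placeholder):
pveum user modify root@pam --email ops@example.com

# The same across all hypervisors as an ad-hoc ansible run,
# assuming a hypothetical inventory group named 'pve':
#   ansible pve -m command -a "pveum user modify root@pam --email ops@example.com"
```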