Recent content by JamesT

  1. Proxmox VE 7.2 released!

    I can see v7.2-3 in my console, but I cannot see the VMID Range under Datacenter > Options. The last available option is still "Maximal Workers/Bulk action". I also cannot see VirGL as a display option, only VirtIO-GPU and all the usual ones. Here are my package versions:
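    (The package version list referred to here is typically gathered on a PVE node with pveversion; a minimal sketch, not part of the original post:)

      # print the versions of all Proxmox-related packages on this node
      pveversion -v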
  2. Live-Migration almost freezes Targetnode

    @j4ys0n do you mind if I ask what your server specs are? My current servers, which are still running (and which were the topic of my original post earlier in this thread), are Dell R520s. I'm just configuring a new 2-node R740 HPC cluster now and hoping that newer hardware will somehow make this problem go...
  3. [SOLVED] Restore from PBS1.0-5 to PVE7.0-11 ground to halt very slow

    Yes, I use ZFS for the PBS server. After rebooting the PBS server and doing apt dist-upgrade, I tried doing the restore from the target node and it completed OK over the weekend. Per your suggestion I've turned on relatime for the pool: root@PBSNODE:~# zfs set relatime=on backup-pool. If I get time...
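    (For reference, a minimal sketch of setting and verifying relatime on the pool mentioned above, assuming the pool is named backup-pool as in the post:)

      # enable relatime on the backup pool and confirm the property took effect
      zfs set relatime=on backup-pool
      zfs get relatime backup-pool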
  4. [SOLVED] dirty/new bitmaps

    Thanks for considering my idea =) I'm not sure what kind of disk reads the backup job does, but as I mentioned it can take many hours if the dirty bitmap is lost. Whereas a simple MD5 hash (as per the link I sent) can take seconds for a 1GB file, and by extrapolation a 1TB file maybe...
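    (A rough sketch of the timing comparison being described, using a hypothetical 1GB file at /tmp/test.img; the extrapolation to 1TB is linear, not measured:)

      # create a 1GB test file and time an MD5 hash of it (illustrative only)
      dd if=/dev/urandom of=/tmp/test.img bs=1M count=1024
      time md5sum /tmp/test.img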
  5. [SOLVED] dirty/new bitmaps

    Just an idea - what if you hash the VM files, set a flag, and at the start of the next backup check the hash against the on-disk contents? Hashing large files doesn't take that long for simple comparisons these days...
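    (A minimal sketch of that idea using md5sum's check mode; the VM ID, path and filenames are hypothetical, and this is not something the backup job actually does:)

      # after a backup: record a hash of the disk image as the "flag"
      md5sum /var/lib/vz/images/101/vm-101-disk-0.raw > /var/lib/vz/images/101/vm-101-disk-0.md5
      # at the start of the next backup: compare the stored hash to the on-disk contents
      md5sum -c /var/lib/vz/images/101/vm-101-disk-0.md5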
  6. Restore extremly slow

    I'm having a somewhat similar issue. Did you find out what was wrong, @Stefan_Malte_Schumacher? In my case, 4 VMs restored very quickly and the last one, which is bigger, started off OK and then ground to a halt...
  7. [SOLVED] Restore from PBS1.0-5 to PVE7.0-11 ground to halt very slow

    Hello. I rebuilt a standalone node with PVE7 and applied the latest updates. This node runs 5 VMs. I restored 4 of the 5 (small Debian VMs - 2GB-5GB used of a 32GB volsize), each in under a minute. I then went to do the big Windows fileserver (~3TB) and it started off pretty fast but then ground to a halt...
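    (One way to see whether the target pool itself is the bottleneck while such a restore runs is to watch pool I/O; a sketch, assuming the target ZFS pool is named pool1 - the pool name is an assumption:)

      # sample per-vdev read/write throughput on the target pool every 5 seconds
      zpool iostat -v pool1 5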
  8. [SOLVED] "Can't live migrate VM with local cloudinit disk" but cloudinit is on cluster storage

    I suspect that cluster storage, which is the same on each node, is not defined as "shared storage", which would be central storage that each node connects to and shares. However, why can the migration be done with pool1:vm-101-disk-3 but not with pool1:vm-101-cloudinit? Thanks in advance.
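    (For comparison, a sketch with hypothetical names of how the two cases might look in /etc/pve/storage.cfg - a local ZFS pool that merely exists under the same name on every node, versus an NFS export that is genuinely central:)

      zfspool: pool1
              pool pool1
              content images,rootdir
              sparse 1

      nfs: nas1
              server 192.168.1.50
              export /export/vms
              path /mnt/pve/nas1
              content images

    (Only the second kind of entry is treated as shared storage by the cluster; the zfspool entry just names an identically-named local pool on each node, which is why a migration has to copy or replicate the data.)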
  9. VM's become unresponsive when backups run or a disk is moved

    Hi @curtsahd, did you ever get to the bottom of this? I'm trying to track down similar issues in my cluster.
  10. Live-Migration almost freezes Targetnode

    Hi @Dominic, I've searched a bit but cannot find how to decrease ZFS send I/O priority. Can you help me out with the right terms to search for, a link to an article, or even the command itself? Many thanks, James
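    (A related knob, though it is a bandwidth cap rather than an I/O priority setting, is the cluster-wide migration limit in /etc/pve/datacenter.cfg; the value below is only an example:)

      bwlimit: migration=102400

    (Units are KiB/s, so 102400 corresponds to roughly 100 MiB/s; a per-migration limit can also be passed, e.g. qm migrate <vmid> <target> --bwlimit <KiB/s>.)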
  11. Live-Migration almost freezes Targetnode

    @abien OK, thanks, I appreciate the response. We are looking to migrate to Ceph too when time permits.
  12. Live-Migration almost freezes Targetnode

    @m.witt did you ever solve this? I've been facing the same or a very similar issue with live migrations since we started using Proxmox on 6.1 a year ago. Now on 6.3-3 and still having the same issue: live migration slows everything down, I/O delay goes way up, but the cluster is still usable and VMs...
  13. failed - no tunnel IP received

    Sorry for not replying, Fabian - I somehow didn't get a notification. If I recall correctly, I ended up resolving this by rebooting both nodes, and then it was fine.