Recent content by jnkraft

  1. Disk migration speed is capped by disk read/write limits in PVE 9

    Hello! Thanks for confirming the bug. Will wait for a fix.
  2. Disk migration speed is capped by disk read/write limits in PVE 9

    Hi, thanks for the link — I read through that thread. My limits are set to default/none, and, as I mentioned, I have a reproducible correlation between migration speed and the disk limits. And I upgraded from PVE 8 to PVE 9 last weekend, so the contrast is stark and very recent.
  3. Disk migration speed is capped by disk read/write limits in PVE 9

    I recently upgraded my cluster from PVE 8 to PVE 9 and noticed that offline and online disk migration (qcow2) between storage backends (NFS) has become SLOW. After a few experiments, I observed that the speed is constant and, surprisingly, matches the Read and Write limits set in the settings of...
  4. I migrated a Windows 7 VM from ESXi to PVE, and it caused the PVE node to hang

    The cluster consists of several compute nodes and an NFS server (based on PVE) as shared storage. The disk format of the VM is qcow2. Symptoms: under certain disk loads within the VM, it gradually starts to hang. In the web interface, the "Summary" tab and the console display with significant...
  5. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Please forgive me and forget my words; I was a little upset that no one from the Proxmox staff had shown interest in my problem before this moment. :) I'll be glad to be useful in any way. I updated one of the compute nodes with non-critical VMs to 8.2/6.8 and after a few hours got this in dmesg: [Wed...
  6. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Hmm, for some reason I didn't receive any notifications about new messages in this topic. First, thank you, guys, for paying attention to my post. Second, after 3 days of testing and reinstalling PVE on test nodes, I found the cause of the memory leaks in my case. It's as silly as it is strange, to me at least...
  7. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Fourteen hours after the first message I lost the connection to Node1; this time even the local console died, leaving only a blinking cursor.
  8. Memory leak(?) on 6.8.4-2/3 PVE 8.2

    Node1: kernel 6.8.4-3. Node2: kernel 6.5.13-5, pinned. Both nodes are used as NFS storage servers for the compute nodes, so there is no load on them other than the NFS server. PVE is used as the OS to keep the infrastructure uniform; basically only its Debian part is used. Hardware on the two nodes differs only in...
  9. Proxmox VE 8.2 released!

    Sorry for the silly question: if I map physical interface names with systemd.link files, wouldn't that affect the vmbr interfaces? All my PVE networking is OVS-based, so, for example, I have eno1+eno2 in a bond and a vmbr bridge on top of it; the bridge has the same MAC as eno1. (See the .link sketch after this list.)
  10. How to move existing PBS to new installation?

    Hi Chris, thanks for your answer! Right now the datastore lives on LVM on remote iSCSI (a temporary solution), so my guess is that simply re-adding the iSCSI target to the new installation with the same paths would do the trick? Later the datastore will be migrated to local storage via LVM pvmove (see the iSCSI/pvmove sketch after this list). PBS and PVE are co-installed...
  11. How to move existing PBS to new installation?

    I have a PBS+PVE server (with a Bacula VM for file-level backups), with one host LV for the datastore and another host LV passed as a physical disk to the Bacula VM. The OS is installed on mdadm RAID1 HDDs; I want to do a clean installation on an SSD RAID on the same hardware server. Moving the PVE part to a fresh...
  12. Slow qcow2 disk migration between nfs-storages

    Compute nodes: 2x10Gbit LACP, NFS nconnect=16. Storage nodes: 4x10Gbit LACP, mdadm with 8x8TB enterprise SSDs, NFS daemon count increased to 256. Network: OVS, because of the many different VM VLANs; the migration network is in a separate VLAN and CIDR on top of the OVS bond (see the mount/fio sketch after this list). I can almost max out fio to the underlying...
  13. Another boring question about ZFS on top of HW raid

    I know it is a very controversial subject and generally a bad idea. But maybe I have a somewhat specific case? Very tight hardware resources and no budget at all. Old ProLiant server, single PVE node, fully filled disk bay, HBA mode, the usual raidz2 is ready; that part is trivial. It has an FC HBA card PtP...
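
For item 9, a minimal sketch of a .link file, assuming a hypothetical file name and a placeholder MAC (read the real one with ethtool -P eno1). The relevant detail is matching on PermanentMACAddress= rather than MACAddress=: the OVS bond and the vmbr bridge clone eno1's current MAC, but only the physical NIC has a permanent hardware address, so a rule like this should not touch the virtual interfaces.

    # Hypothetical: pin eno1's name via its permanent (burned-in) MAC.
    cat > /etc/systemd/network/10-eno1.link <<'EOF'
    [Match]
    # Virtual devices (bond, vmbr) have no permanent MAC, so this cannot
    # match the bridge even though it shares eno1's current MAC.
    PermanentMACAddress=aa:bb:cc:dd:ee:01

    [Link]
    Name=eno1
    EOF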
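For item 10, a rough shell sequence for re-attaching the existing iSCSI-backed datastore on a fresh installation and later moving it to local storage; the portal address, IQN, VG name, and device names are placeholders, not the poster's actual values.

    # Placeholders: 192.0.2.10 (portal), iqn.2004-04.example:pbs (target),
    # vg_pbs (volume group), /dev/sdy (old iSCSI PV), /dev/sdx (new local disk).
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2004-04.example:pbs -p 192.0.2.10 --login

    # LVM should pick up the existing PV/VG/LVs from the attached LUN,
    # so the datastore paths can stay the same.
    vgscan
    lvs vg_pbs

    # Later, online migration to the local SSD raid:
    pvcreate /dev/sdx
    vgextend vg_pbs /dev/sdx
    pvmove /dev/sdy /dev/sdx     # move all extents off the iSCSI PV
    vgreduce vg_pbs /dev/sdy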
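For item 12, the client-side mount option and a fio run of the kind described; nconnect=16 is the value from the post, while the export path, mount point, and fio sizing are assumptions.

    # nconnect needs kernel >= 5.3 on the NFS client.
    mount -t nfs -o vers=4.2,nconnect=16 storage1:/export/images /mnt/pve/nfs-images

    # Hypothetical sequential-throughput check against the mount:
    fio --name=seqread --rw=read --bs=1M --size=8G --numjobs=4 \
        --ioengine=libaio --direct=1 --group_reporting \
        --filename=/mnt/pve/nfs-images/fio.test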