Recent content by SimonR

  1. Behaviour of locked VMs in case of HA migration

    Hello everyone. My question is: how does a VM that is locked (by a backup or a config change) behave if its node is fenced or has a hardware error? The HA service is set to "running". Will it be shown as locked, with unchanged configuration, on another node? After my readings of the HA mechanism in case...
  2. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    The problem is solved in kernel 6.8.8.4, no more CIFS memory leaks. The RAM usage is stable now. But: how can this kernel make it into the production (hardly tested?) repo? Was someone asleep while testing 1-3 runs of a simple backup job to an SMB/CIFS share? ;)
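A rough way to confirm that a kernel's CIFS memory behaviour is back to normal is to sample memory counters around a backup run. This is only a sketch: the counters come from standard Linux procfs, and the backup step is a placeholder for whatever job (e.g. vzdump to the SMB share) triggers the leak:

```shell
# Sample available memory and unreclaimable slab before and after a
# backup job; on a leaking kernel, SUnreclaim keeps growing run after run.
sample() {
    awk '/^(MemAvailable|SUnreclaim):/ {printf "%s %s kB\n", $1, $2}' /proc/meminfo
}

echo "before backup:"; sample
# ... run the backup job to the SMB/CIFS share here ...
echo "after backup:"; sample
# For a per-cache breakdown, `slabtop -o` (needs root) shows which
# kernel cache is actually holding the memory.
```

Comparing the two samples over several runs makes the "about 300 MB more per backup" pattern from the thread easy to document.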
  3. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    O.K. Thank you for your info: the problem still exists in kernel 6.8.8.2. The RAM increase started again after switching to the new kernel.
  4. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    O.K. I will wait for the next kernel update and post here if the new kernel solves this problem.
  5. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    It mostly crashes during a normal backup job to a Windows SMB share. After every backup job, about 300 MB more are consumed in total. But with the older 6.5 kernel there was never any problem with the SMB shares during backups. And if I switch the same PVE back to the 6.5 kernel, it is all...
  6. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    No, it's a simple ZFS volume. I'm running the older 6.5.13-5 kernel now, and without changing anything there is no problem; the RAM usage is not increasing there. I'm waiting for further official kernel updates after 6.8.4-4. At first I thought it might have something to do with a backup...
  7. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    The problem is still there, and I think it has something to do with QEMU. Every day a bit more RAM is consumed, and after a week or so the whole PVE crashes and reboots. The RAM screenshot shows it. We have two PVE nodes in a small cluster; one of them is running only a small Win2019...
  8. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    And again, just to clarify: does no one else notice this? Left side: kernel 6.8, right side: kernel 6.5
  9. Kernel 6.8.5.2 and 6.8.5.3 consumes MORE memory than 6.5

    I've now read about many problems with the new 6.8.5.2 and 6.8.5.3 kernels. In my case the server drops away and reboots during a simple backup job that has run for years through all other kernels. I think something must be wrong with the memory management of the new kernel. On some hosts I also...
  10. ZFS slow after upgrade to 2.2.0 in latest Proxmox Update

    Hi all, in the past we had some problems with ZFS performance slowing down after a few minutes. After a long ride, we decided to turn off the sync feature on our pool, and that solved the problem for a long time. After the last upgrade of Proxmox and ZFS 2.2.0, exactly the same problem...
  11. Cannot destroy dataset xxx: dataset already exists

    Yeah, but at the moment this can also happen within Proxmox if you use an encrypted ZFS path as PVE storage and have replication enabled, because incremental snapshots are synced there as well.
  12. Dataset cannot be destroyed with error "dataset already exists"

    No chance, it's really a ZFS bug; I think they will implement and release the fix soon. But the ZFS vol will stay there without any chance of deleting it. It will live forever now, with child data but without a child. https://github.com/openzfs/zfs/pull/14119#issuecomment-1331985760
  13. Cannot destroy dataset xxx: dataset already exists

    That is/was indeed still a bug in ZFS, described here. Will the bugfix also make it into the Proxmox ZFS version? https://github.com/openzfs/zfs/pull/14119 You probably can't get rid of the volumes or datasets anymore... thankfully I was able to help myself another way, not entirely clean, but without pool...
  14. Cannot destroy dataset xxx: dataset already exists

    That's what I thought at first too, but nothing is running anymore; I have also restarted the server several times. ps aux | grep zfs shows no processes. There are also no "lost" resume tokens in the dataset. This apparently occurred because I replicated the dataset from another server...
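For reference, the resume-token check mentioned above can be scripted. On a real system the value would come from `zfs get -H -o value receive_resume_token <dataset>`, which prints `-` when no interrupted receive is pending; the dataset name is a placeholder and the value is simulated with printf here, so the sketch runs without a pool:

```shell
# Real command (rpool/data/somedataset is just an example name):
#   token=$(zfs get -H -o value receive_resume_token rpool/data/somedataset)
# "-" means no pending resumable receive; any other value is a stale
# token from an interrupted `zfs receive -s`, which can be cleared with
# `zfs receive -A <dataset>`.
token=$(printf -- '-\n')   # stand-in for the zfs get output
if [ "$token" = "-" ]; then
    echo "no resume token pending"
else
    echo "stale resume token: $token"
fi
```

A dataset with no processes, no snapshots, and no resume token, that still refuses to be destroyed, matches the bug discussed in the linked PR.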
  15. Dataset cannot be destroyed with error "dataset already exists"

    Hi all, after a bad sync with zfs-autobackup I cannot delete 2 ZFS datasets on my server. There are no snapshots or clones for these datasets, but they still show USEDCHILD space, and I'm wondering why. I can rename the datasets. I can also snapshot these 2 datasets, and I can...
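The checks described in the post roughly correspond to the commands below. The dataset name is a placeholder, and the `usedbychildren` value is simulated so the snippet runs without ZFS installed; on a real system each value would come from the commented `zfs` commands:

```shell
# Real commands behind the checks (pool/ds is a placeholder):
#   zfs list -t snapshot -r pool/ds          # should list no snapshots
#   zfs get -H -o value origin pool/ds       # "-" means it is not a clone
#   zfs get -H -o value usedbychildren pool/ds
# Simulated usedbychildren value, as reported for the stuck datasets:
used="96K"   # stand-in for the zfs get output
if [ "$used" != "0B" ] && [ "$used" != "0" ]; then
    echo "dataset still charges $used to hidden children"
fi
```

Nonzero `usedbychildren` with no visible snapshots, clones, or child datasets is exactly the symptom behind the "dataset already exists" destroy error in this thread.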