Recent content by SimonR

  1. ZFS slow after upgrade to 2.2.0 in latest Proxmox Update

    Hi all, in the past we had problems with ZFS performance degrading after a few minutes. After a long struggle we decided to disable the sync feature on our pool, and that solved the problem for a long time. After the last Proxmox upgrade, with ZFS 2.2.0, exactly the same problem...
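
    For reference, the workaround mentioned above is a one-liner; a minimal sketch, assuming a hypothetical pool named tank (disabling sync trades crash safety on power loss for write performance):

        # show the current setting; the ZFS default is "standard"
        zfs get sync tank

        # disable synchronous writes pool-wide
        zfs set sync=disabled tank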
  2. Cannot destroy dataset xxx: dataset already exists

    Yeah, but at the moment this can also happen within Proxmox if you use an encrypted ZFS path as pve storage and have replication enabled, because incremental snaps are synced there as well.
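
    For context, such a setup would look roughly like this in /etc/pve/storage.cfg; the storage ID and dataset name below are hypothetical:

        zfspool: enc-zfs
                pool tank/encrypted
                content images,rootdir
                sparse 1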
  3. Dataset cannot be destroyed with error "dataset already exists"

    No chance, it's really a ZFS bug; I think they will implement and release the fix soon. But the ZFS vol will stay there with no way to delete it. It will live on forever with child data but no child. https://github.com/openzfs/zfs/pull/14119#issuecomment-1331985760
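
    The symptom described in the linked PR looks roughly like this; the dataset name below is hypothetical:

        # destroy fails even though zfs list shows no children
        zfs destroy -r tank/vm-100-disk-0
        # cannot destroy 'tank/vm-100-disk-0': dataset already exists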
  4. Cannot destroy dataset xxx: dataset already exists

    This really is (or was) still a bug in ZFS, described here. Will the bugfix also make it into the Proxmox ZFS version? https://github.com/openzfs/zfs/pull/14119 It seems there is no getting rid of the volumes or datasets anymore... thank goodness I was able to help myself another way, not entirely clean, but without the pool...
  5. Cannot destroy dataset xxx: dataset already exists

    That's what I thought at first too, but nothing is running anymore; I have also restarted the server several times. ps aux | grep zfs shows no processes. There are no "lost" resume tokens in the dataset either. This apparently happened because I replicated the dataset from another server...
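
    For anyone debugging the same situation, these are roughly the checks mentioned above (the dataset name is a placeholder):

        # no zfs send/receive still running?
        ps aux | grep zfs

        # any interrupted receive leaving a resume token behind?
        # a value of "-" means no token is present
        zfs get -r receive_resume_token tank/backup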
  6. Dataset cannot be destroyed with error "dataset already exists"

    Hi all, after a failed sync with zfs-autobackup I cannot delete 2 ZFS datasets on my server. No snapshots or clones exist for these datasets, but they still show USEDCHILD space, and I'm wondering why. I'm able to rename the datasets. I can snapshot these 2 datasets too, and I can...
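
    One way to see where that space is attributed, as a sketch with a hypothetical dataset name; the usedby* properties break USED down into its components:

        zfs list -r -t all -o name,used,usedbychildren,usedbysnapshots tank/data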
  7. Cannot destroy dataset xxx: dataset already exists

    Hi all, after a failed sync with zfs-autobackup I cannot delete the ZFS datasets on my target server. The strangest part is that I deleted all snapshots of the datasets beforehand, yet USEDCHILD still shows data even though there are no children...
  8. Storage replication notification mail for every SUCCESS also possible?

    Hi all, is there an easy way to have replication send an info mail after a successful run as well? At the moment I receive notifications only on failure, but I would like something like the "always notify" option available in backup jobs. I need that for...
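
    Since replication jobs seem to offer no built-in "always notify", a crude cron-based workaround could look like this sketch; the schedule and mail address are placeholders:

        # /etc/cron.d/pvesr-report: mail the full replication status every morning
        0 7 * * * root /usr/sbin/pvesr status | mail -s "Replication status" admin@example.com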
  9. Nice little sh-script to monitor LVM-thin space with warn mail

    With these two little scripts you can monitor your LVM-thin space and catch snapshots eating up your overprovisioned storage. Save each script as an .sh file, chmod +x it, and it's ready to use once you have located the right attribute in the lvs output. Change the mail address; it runs fine...
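
    The excerpt is cut off, so here is a minimal sketch in the same spirit; the threshold and mail address are placeholders, and thin pools are picked out by the leading "t" in their lv_attr:

        #!/bin/sh
        # Hedged sketch: warn when an LVM thin pool crosses a usage threshold.
        THRESHOLD=80                  # warn at 80% - adjust to taste
        MAILTO="admin@example.com"    # placeholder address

        # lv_attr starting with "t" marks a thin pool; data_percent is its fill level
        lvs --noheadings -o lv_name,lv_attr,data_percent | \
        while read NAME ATTR PCT; do
            case "$ATTR" in
                t*)
                    USED=${PCT%%.*}   # integer part of e.g. "83.12"
                    if [ "${USED:-0}" -ge "$THRESHOLD" ]; then
                        echo "Thin pool $NAME at ${PCT}% usage" | \
                            mail -s "LVM-thin warning on $(hostname)" "$MAILTO"
                    fi
                    ;;
            esac
        done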
  10. Win2019 Server PCIe passthrough boot problems since last Proxmox update to 7.2 - \boot\bcd 0xc00000e9

    Rebuild with older kernels? And live with the security problems, or hope that the kernel update after next will solve this? We solved it by moving from the passthrough NVMe to a Proxmox LVM-thin on that NVMe.
  11. Win2019 Server PCIe passthrough boot problems since last Proxmox update to 7.2 - \boot\bcd 0xc00000e9

    We just scrapped the plan to pass it through again. I'm angry, my customer is even angrier, but I warned him a few months earlier not to pass through a PCIe NVMe. Still, everything was OK for more than 6 months. I don't know if a Linux VM would run, because we don't need a Linux VM, and we will not spend...
  12. Win2019 Server PCIe passthrough boot problems since last Proxmox update to 7.2 - \boot\bcd 0xc00000e9

    Hi all, since the last Proxmox updates we have had problems with a Win2019 Server using an Intel Optane PCIe passthrough device as its boot volume. At first we thought it was a Windows update error, but it's not. A fresh install of Win2019 Server on a new VM runs normally until it reboots. The reboot ends...
  13. Problems with replication since the last Proxmox VE 7.1 update

    It must have had something to do with PVE Manager 7.1.4; after upgrading to 7.1.6 the replications returned to normal operation and the symptoms in the post above are gone. I had it on 3 different clusters, and all of them are solved now. For me it's now important to keep an eye on the replication...
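
    For reference, the check that makes this easy to watch after an upgrade, with no assumptions beyond a configured replication job:

        # list all replication jobs with job ID, last sync time and state
        pvesr status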
  14. VM deleted -> ZFS partition gone :(

    If you no longer see it with "zfs list", it's gone. ZFS snapshots always refer to the machine, or rather to its VHDs. And if you delete the machine along with its VHDs, the associated snapshot is gone as well.
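
    To double-check before giving up, something like this lists every dataset, volume, and snapshot still on the pool (the pool name is a placeholder):

        zfs list -r -t all tank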
