Hi all, in the past we had problems with ZFS performance degrading after a few minutes. After a long investigation we decided to turn off the sync feature on our pool, and that solved the problem for a long time.
After the last upgrade of Proxmox and ZFS 2.2.0, exactly the same problem is back...
Yes, but at the moment this can also happen within Proxmox if you use an encrypted ZFS path as PVE storage and have replication enabled, since incremental snapshots are synced there as well.
No chance, it's really a ZFS bug; I think they will implement and release the fix soon. But the ZFS volume will stay there with no way of deleting it. It will live on forever, showing child data without any child.
https://github.com/openzfs/zfs/pull/14119#issuecomment-1331985760
That is/was apparently a real bug in ZFS, described here. Will the bugfix also make it into the Proxmox ZFS version?
https://github.com/openzfs/zfs/pull/14119
You probably can't get rid of the volumes or datasets anymore... thankfully I was able to work around it in another way; not entirely clean, but without rebuilding the pool...
That's what I thought at first too, but nothing is running anymore; I have also restarted the server several times.
ps aux | grep zfs
shows no processes. There are also no "lost" resume tokens in the dataset.
This apparently occurred because I replicated the dataset from another server...
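For reference, leftover tokens from an interrupted receive show up in the `receive_resume_token` property. A minimal sketch of how one might scan a pool for them (the pool name `tank` and the helper function are placeholders, not from the post):

```shell
#!/bin/sh
# Filter the output of `zfs get -r -H -o name,value receive_resume_token`
# down to datasets that actually carry a leftover token (value != "-").
list_resume_tokens() {
    awk '$2 != "-" { print $1 }'
}

# On a real system you would pipe the zfs output through it:
#   zfs get -r -H -o name,value receive_resume_token tank | list_resume_tokens
```

If this prints nothing, there is no interrupted receive holding the dataset.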
Hi all,
after a failed sync with zfs-autobackup I cannot delete 2 ZFS datasets on my server.
There are no snapshots or clones for these datasets, but they still show USEDCHILD space, and I'm wondering why.
I'm able to rename the datasets. I can snapshot these 2 datasets too, and I can...
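The USEDCHILD figure comes from the `usedbychildren` property, which can be read in raw bytes with `zfs get -H -p -o value usedbychildren <dataset>` and compared against the child count from `zfs list -r -H -o name <dataset>`. A hedged sketch of that check (the helper function and its inputs are illustrative, not from the post):

```shell
#!/bin/sh
# Flag a dataset whose usedbychildren is nonzero even though no child
# datasets are listed -- the symptom described above.
check_orphan_child_space() {
    # $1 = usedbychildren in bytes, $2 = number of child datasets
    if [ "$1" -gt 0 ] && [ "$2" -eq 0 ]; then
        echo "orphaned child space: $1 bytes"
    fi
}
```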
Hello everyone,
after a failed sync with zfs-autobackup I cannot delete the ZFS datasets on my target server.
The strangest part is that I deleted all snapshots of the datasets beforehand, yet USEDCHILD still shows data, even though no children...
Hi all,
is there an easy way to make the replication also send an info mail after a successful run?
At the moment I only receive notifications in case of failure, but I would like to change that, similar to how backup jobs can be set to "always notify".
I need that for...
With these two little scripts you can monitor your LVM-thin space and be warned before snapshots kill your overprovisioned storage. Save each script as a .sh file, chmod +x it, and it's ready to use once you've located the attribute output in the lvs command. Change the mail address; it runs fine...
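The scripts themselves aren't included in the snippet; a minimal sketch of such a monitor, assuming a volume group `pve` with thin pool `data` (threshold, pool name, and mail address are all placeholders to adjust):

```shell
#!/bin/sh
# Warn by mail when the LVM-thin pool's data usage crosses a threshold.
THRESHOLD=80                      # percent; placeholder value
MAILTO="admin@example.com"        # placeholder address

check_thin_usage() {
    # $1 = data_percent as reported by `lvs` (e.g. "83.12")
    pct=${1%%.*}                  # keep the integer part
    [ "$pct" -ge "$THRESHOLD" ]
}

# On a real system:
#   usage=$(lvs --noheadings -o data_percent pve/data | tr -d ' ')
#   if check_thin_usage "$usage"; then
#       echo "LVM-thin pool pve/data at ${usage}%" \
#           | mail -s "thin pool warning" "$MAILTO"
#   fi
```

Run it from cron to get the warning before the overprovisioned pool fills up.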
Rebuilding with older kernels? And living with security problems, or hoping that the kernel update after next will solve this? We solved it by changing the passed-through NVMe to a Proxmox LVM-thin on that NVMe.
We just scrapped the plan to pass it through again. I'm angry, my customer is even angrier, but I warned him a few months ago not to pass through PCIe NVMe. Still, everything was OK for more than 6 months. I don't know if a Linux VM would run, because we don't need a Linux VM, and we will not spend...
Hi all,
since the last Proxmox updates we have had problems with a Win2019 Server using an Intel Optane PCIe passthrough device as boot volume. At first we thought it was a Windows update error, but it's not. A fresh install of Win2019 Server in a new VM runs normally until it reboots. The reboot ends...
It must have had something to do with PVE Manager 7.1.4; after upgrading to 7.1.6 the replications returned to normal operation and the symptoms in the post above are gone. I had this on 3 different clusters, and now all are solved. For me it's now important to keep an eye on the replication...
If you no longer see it with "zfs list", it's gone.
ZFS snapshots always refer to the machine, or rather to its VHDs. And if you delete the machine along with its VHDs, the corresponding snapshot would be gone too.