Thanks, it seems that there is a bit of a race condition somewhere with the ZFS mounts. Not sure if this is really about the pve service (which doesn't require zfs to be started first, that's true) or something else... In any case, my quick fix for the time being is a script that I'm running...
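(The actual script is cut off above; purely as an illustration, with the dataset and unit names being assumptions, such a check could look roughly like this:)
#!/bin/sh
# illustrative sketch only - not the script referenced above
# mount the VM dataset if the boot-time mount raced and failed, then restart the guests
if [ "$(zfs get -H -o value mounted pool_opt/VMs)" != "yes" ]; then
    zfs mount pool_opt/VMs
    systemctl restart pve-guests.service
fi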
I think I did something like
zpool remove pool_sata mirror-1
and then
zpool add pool_sata mirror <device1> <device2>
As you can see from the pool status above, removing the mirror worked; ZFS reallocated the data. Then the new mirror was added. However, that reallocation ended immediately, so I don't...
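(In case it helps anyone reading along: whether the removal actually evacuated and rebalanced data can be checked per vdev; the pool name is taken from the commands above.)
# allocation per vdev - after the removal the old mirror disappears
# and its data shows up on the remaining vdevs
zpool list -v pool_sata
# zpool status also reports the progress/result of the device removal
zpool status pool_sata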
What I have is the output of "zpool status" from the same time during boot:
pool: pool_opt
state: ONLINE
scan: scrub repaired 0B in 0 days 00:02:43 with 0 errors on Sun Dec 13 00:26:44 2020
config:
NAME STATE READ WRITE CKSUM...
I have a striped mirror pool consisting of 4 SATA disks that is not mounted during boot. Syslog gives the error message in the subject above. The problem appears to be that, for some reason, ZFS takes a long time to read/import/... the pools. Because after the system has booted, after I manually open a...
If you have a pool of SSDs (or even just a single one) that should be trimmed, and if all data you put onto it is from virtual disks, you probably don't need to do anything else. But if you have a pool that contains much more than just your virtual disks (e.g. also your media collection, mails...
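(For what it's worth, the boot-time behaviour can be narrowed down with the ZFS systemd units; unit names as shipped with ZFS on Linux, this is only a diagnostic sketch.)
# which import path was used and whether mounting succeeded
systemctl status zfs-import-cache.service zfs-import-scan.service zfs-mount.service
# how long the ZFS units took during the last boot, and their log
systemd-analyze blame | grep -i zfs
journalctl -b -u zfs-mount.service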
The above discussion suggests that the autotrim feature might not work, and that, in any case, a manually scheduled trim should be better performance-wise.
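(As a sketch of what that could look like - the pool name is just an example, and autotrim is a pool property available since ZFS 0.8:)
# continuous trimming of freed blocks (if it works reliably)
zpool set autotrim=on pool_opt
# alternatively trim manually / via cron, e.g. weekly at night
# (example crontab entry; path and schedule are placeholders)
0 3 * * 0 /sbin/zpool trim pool_opt
# check trim progress and status
zpool status -t pool_opt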
defrag has nothing to do with trim. Trim just discards unused data blocks on SSDs; defrag tries to reduce fragmentation (mainly helpful on...
I just had this problem again: When I rebooted the server (which, fortunately, I don't do too often), the VMs dataset was not mounted, and pve thus didn't start any VMs. The issue was really only that the dataset was not mounted, and it could be fixed by issuing
zfs mount pool_opt/VMs
I don't...
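(One possible workaround - a sketch, assuming pve-guests.service is what autostarts the VMs and zfs-mount.service is what mounts the datasets - is a systemd drop-in that enforces the ordering:)
# /etc/systemd/system/pve-guests.service.d/wait-for-zfs.conf (hypothetical drop-in)
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service
followed by a systemctl daemon-reload so it takes effect at the next boot.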
Not sure if this is the best forum, but I will give it a try:
I have two 2TB Samsung 860 Pro SSDs here. Both show approximately the same SMART status:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 100 100 010...
Admittedly, it's been a while. PVE just kept running in the meantime with no need to reboot, so I have only now tested the changes to storage.cfg suggested above. What I found out is that the addition of
is_mountpoint true
leads to the respective ZFS directory (/mnt/zfs_opt/VMs) not being mounted...
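(For reference, the kind of entry I mean - a sketch only; the storage ID and content types are assumptions, and mkdir 0 additionally stops PVE from pre-creating the directory:)
dir: zfs_opt_VMs
        path /mnt/zfs_opt/VMs
        content images,rootdir
        is_mountpoint true
        mkdir 0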
Indeed, the journal contained a bit more information. This appears to be related to this other issue I had, where a Directory configured in PVE on a ZFS resource populated the mount directory before ZFS could mount the dataset. And once the mount directory contained something, ZFS threw an error...
Yes, I think that's what caused the problems I described in my first post at the top. After I got it working again (by removing all data from zfs_opt and then recreating the pool and dataset, which was a bit of a challenge, as PVE kept creating the directory too early - but in the end...
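(A quick way to tell whether one is looking at the real dataset or at stray files in the plain directory underneath - just a diagnostic sketch; names are taken from the posts above and the pool/directory mapping is assumed:)
# does ZFS consider the dataset mounted, and where?
zfs get -H mounted,mountpoint pool_opt/VMs
# is anything actually mounted on the directory PVE uses?
findmnt /mnt/zfs_opt/VMs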
Thanks.
Interestingly, the "VMs" Directory is not shown in the GUI (should it?). So I've tweaked storage.cfg manually as follows (full paste this time):
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

zfspool: local-zfs
        pool rpool/data
        content...
@wolfgang: I'm sorry that I have to report that the segfault upon boot is back.
# dmesg | grep zfs
[ 23.442792] traps: zfs[10790] general protection fault ip:7fbe057054a6 sp:7fbdf7ff7310 error:0 in libc-2.28.so[7fbe056a3000+148000]
[ 23.462145] systemd[1]: zfs-mount.service: Main process...
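(The matching journal entries can be pulled out with the commands below; nothing special, just for completeness.)
# boot-time log of the failing unit
journalctl -b -u zfs-mount.service
# kernel messages around the fault
dmesg | grep -i -A2 'general protection'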
I have an NVMe pool that I want to use for
a) storing VMs
b) storing mails (by a mail server run in one of the VMs)
For this purpose, I have created, via the command line, a ZFS pool with two datasets, "VMs" and "mail" (mail is obviously shared via NFS with a VM). I have then, in the Proxmox GUI...
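(For context, such a layout can be created along these lines - a sketch, not a verbatim paste; the pool layout, device names, and NFS export options are assumptions:)
# create the NVMe pool (mirror layout and device names are placeholders)
zpool create -o ashift=12 pool_opt mirror /dev/nvme0n1 /dev/nvme1n1
# one dataset for VM disks, one for mail
zfs create pool_opt/VMs
zfs create pool_opt/mail
# export the mail dataset via NFS to the mail VM (options are an example)
zfs set sharenfs='rw=@192.168.1.0/24,no_root_squash' pool_opt/mail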
Yes, I haven't found out yet either how to reproduce this bug reliably. It's just weird that it happens from time to time, and it gives me a bit of a bad feeling.
The machine here is an EPYC 7502P on a TYAN platform with 256 GB RAM. It uses NVMe and SATA SSDs. Boot is from two...