Search results

  1. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    Thanks, it seems that there is a bit of a race condition somewhere with the ZFS mounts. Not sure if this is really about the pve service (which doesn't require zfs to be started first, that's true) or something else... In any case, my quick fix for the time being is a script that I'm running...
  2. ThinkAgain

    pvestatd: zfs error: cannot open 'pool_sata': no such pool

    I think I did something like zpool remove pool_sata mirror-1 and then zpool add pool_sata mirror <device1> <device2>, as you can see from the pool status above. Removing the mirror worked; ZFS reallocated the data. Then the new mirror was added. However, that reallocation ended immediately, so I don't... (see the command sketch after these results)
  3. ThinkAgain

    pvestatd: zfs error: cannot open 'pool_sata': no such pool

    What I have is the output of "zpool status" from the same time during boot:
      pool: pool_opt
     state: ONLINE
      scan: scrub repaired 0B in 0 days 00:02:43 with 0 errors on Sun Dec 13 00:26:44 2020
    config:
            NAME        STATE     READ WRITE CKSUM...
  4. ThinkAgain

    pvestatd: zfs error: cannot open 'pool_sata': no such pool

    I have a striped mirror pool consisting of 4 SATA disks that is not mounted during boot. Syslog gives the error message in the subject above. The problem appears to be that, for some reason, ZFS takes a long time to read/import/... the pools, because after the system has booted, after I manually open a...
  5. ThinkAgain

    zpool trim in Proxmox 6.1

    If you have a pool of SSDs (or even just a single one) that should be trimmed, and if all data you put onto it is from virtual disks, you probably don't need to do anything else. But if you have a pool that contains much more than just your virtual disks (e.g. also your media collection, mails...
  6. ThinkAgain

    zpool trim in Proxmox 6.1

    The above discussion suggests that the autotrim feature might not work and that, in any case, a manually scheduled trim should be better performance-wise. Defrag has nothing to do with trim: trim just discards unused data blocks on SSDs, while defrag tries to reduce fragmentation (mainly helpful on... (a scheduling sketch follows after these results)
  7. ThinkAgain

    ZFS boot stuck at (initramfs), no root device specified

    Yes, setting the boot mode from DUAL to UEFI in BIOS before (re)installation of PVE worked for me as well on a Supermicro X10SRL-F.
  8. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    I just had this problem again: when I rebooted the server (which, fortunately, I don't do too often), the VMs dataset was not mounted, and PVE thus didn't start any VMs. The issue was really only that the dataset was not mounted, and it could be fixed by issuing zfs mount pool_opt/VMs. I don't... (see the mount-check sketch after these results)
  9. ThinkAgain

    SSD Wear Out Calc?

    Not sure if this is the best forum, but I will give it a try: I have two 2TB Samsung 860 Pro SSDs here. Both show approximately the same SMART status:
      ID# ATTRIBUTE_NAME         FLAG    VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        5 Reallocated_Sector_Ct  0x0033  100   100   010...
  10. ThinkAgain

    CPU Temperatur an VM weitergeben (passing the CPU temperature through to a VM)

    A detailed example from the web: https://kleypot.com/proxmox-home-assistant-host-system-hardware-monitoring/
  11. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    Admittedly, it's been a while. Since PVE just kept running in the meantime with no need to reboot, I have only now tested the changes to storage.cfg suggested above. What I found out is that adding is_mountpoint true leads to the respective ZFS directory (/mnt/zfs_opt/VMs) not being mounted... (see the storage.cfg sketch after these results)
  12. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    Indeed, the journal contained a bit more information. This appears to be related to this other issue I had, where a Directory configured in PVE on a ZFS resource populated the mount directory before ZFS could mount the dataset. And once the mount directory contained something, ZFS threw an error...
  13. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    Yes, I think that's what had caused the problems I described in my first post at the top. After I got it working again (by removing all data from zfs_opt and then recreating the pool and dataset, which was a bit of a challenge, as PVE kept creating the directory too early - but at the end...
  14. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    # zfs list
    NAME            USED  AVAIL  REFER  MOUNTPOINT
    pool_opt        147G   528G   208K  /mnt/zfs_opt
    pool_opt/VMs    108G   528G   102G  /mnt/zfs_opt/VMs
    pool_opt/mail  38.7G   528G  38.7G  /mnt/zfs_opt/mail
    pool_sata...
  15. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    Thanks. Interestingly, the "VMs" Directory is not shown in the GUI (should it?). So I've tweaked storage.cfg manually as follows (full paste this time):
    dir: local
            path /var/lib/vz
            content backup,vztmpl,iso
    zfspool: local-zfs
            pool rpool/data
            content...
  16. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    @wolfgang: I'm sorry that I have to report that the segfault upon boot is back.
    # dmesg | grep zfs
    [ 23.442792] traps: zfs[10790] general protection fault ip:7fbe057054a6 sp:7fbdf7ff7310 error:0 in libc-2.28.so[7fbe056a3000+148000]
    [ 23.462145] systemd[1]: zfs-mount.service: Main process...
  17. ThinkAgain

    Wrong approach to ZFS dataset for VM image storage?

    I have an NVMe pool that I want to use for a) storing VMs and b) storing mail (handled by a mail server run in one of the VMs). For this purpose, I have created, via the command line, a ZFS pool with two datasets, "VMs" and "mail" (mail is obviously shared via NFS with a VM). I have then, in the Proxmox GUI... (see the NFS sharing sketch after these results)
  18. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    Thanks, @wolfgang! This is just to confirm that I haven't seen a segfault since updating to 5.4. So this has probably fixed it!
  19. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    Yes, I haven't found out yet either how to reproduce this bug with certainty. It's just weird that it happens from time to time, and it gives me a bit of a bad feeling. The machine here is an EPYC 7502P on a TYAN platform with 256 GB RAM. It uses NVMe and SATA SSDs. Boot is from two...
  20. ThinkAgain

    [SOLVED] PVE 6.1: ZFS Segfault upon system boot

    Sure, here you go:
    # pveversion -v
    proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
    pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
    pve-kernel-helper: 6.1-7
    pve-kernel-5.3: 6.1-5
    pve-kernel-5.3.18-2-pve: 5.3.18-2
    ceph-fuse: 12.2.11+dfsg1-2.1+b1
    corosync: 3.0.3-pve1
    criu: 3.11-3...
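
A few illustrative sketches for the results above follow. They are not taken from the threads themselves; any names, options, or schedules not visible in the snippets are assumptions.

For results 1 and 8 (datasets not mounted after boot): the quick-fix script mentioned there is not shown, but a minimal sketch, using the pool_opt/VMs dataset from the snippets and a hypothetical restart of pvestatd afterwards, could look like this:

    #!/bin/sh
    # Quick fix: mount the VMs dataset if ZFS did not mount it during boot.
    if [ "$(zfs get -H -o value mounted pool_opt/VMs)" != "yes" ]; then
        zfs mount pool_opt/VMs
        # Restarting pvestatd afterwards is an assumption, so the storage status refreshes.
        systemctl try-restart pvestatd
    fi

Running zfs mount -a instead would mount every dataset whose mountpoint and canmount properties allow it.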
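
For result 2 (swapping out a mirror vdev): zpool add takes the pool name before the vdev specification, which the recollected command omits. A sketch of the full sequence, keeping the placeholder device names from the post:

    # Evacuate and remove the old mirror vdev, then add the new one.
    zpool remove pool_sata mirror-1
    zpool status pool_sata    # wait until the removal/evacuation has finished
    zpool add pool_sata mirror <device1> <device2>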
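
For results 5 and 6 (manual trim instead of autotrim): a minimal scheduling sketch via /etc/cron.d, assuming the pool name pool_sata and an arbitrarily chosen Sunday 03:00 slot:

    # /etc/cron.d/zpool-trim (assumed file name)
    # Trim the SSD pool once a week; progress can be checked with "zpool status -t pool_sata".
    0 3 * * 0  root  /sbin/zpool trim pool_sata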
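
For results 11 and 15 (storage.cfg for the VMs dataset): a minimal sketch of the Directory storage entry under discussion, with the storage ID "VMs" and the content type assumed; is_mountpoint is the option the poster experimented with:

    dir: VMs
            path /mnt/zfs_opt/VMs
            content images
            is_mountpoint yes

With is_mountpoint set, PVE treats the path as an externally managed mount point and considers the storage offline until something is actually mounted there.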
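
For result 17 (sharing the mail dataset with a VM over NFS): the thread does not say how the export is configured; one possibility, assuming an NFS server is installed on the host, is the ZFS sharenfs property:

    # Export the mail dataset over NFS with default options; access can be
    # restricted further via exports-style options in the sharenfs value.
    zfs set sharenfs=on pool_opt/mail
    zfs get sharenfs pool_opt/mail    # verify the setting

A plain /etc/exports entry would work just as well.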
