Search results

  1. Disk pass through or ZFS datasets

    A lot of guides suggest passing through physical disks to VMs when people want to run things like TrueNAS. But what if you want to use your HDDs for more than just a NAS, for instance as log devices to reduce "less important" writes to the SSDs, or for PBS? My gut says I should just set up the RAIDZ at...
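
    For reference, creating the RAIDZ and later attaching a separate log (SLOG) device are both one-liners; a minimal sketch, assuming a pool named tank and hypothetical device names:

    ```sh
    # Build a RAIDZ pool from three HDDs
    zpool create tank raidz /dev/sda /dev/sdb /dev/sdc

    # Attach a dedicated log device afterwards (a partition works too)
    zpool add tank log /dev/sdd1
    ```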
  2. [SOLVED] NFS server in LXC

    Sorry for the very delayed reply. If you follow @unclevic's instructions you might as well install directly on the host; there is no difference, since all the guardrails are removed and the service ties in with the host kernel. Solution 1 from @lz114, on the other hand, would not make an insecure...
  3. ZFS no pools available yet ONLINE import status, I/O error on Import attempt

    For anyone in the future - the following was the sequence of actions that worked for me:
    echo 0 > /sys/module/zfs/parameters/spa_load_verify_metadata
    echo 0 > /sys/module/zfs/parameters/spa_load_verify_data
    zpool import rpool -f -o readonly=on -R /mnt
    # mounted the key volume from the gnome...
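
    (Note for the next reader: the two spa_load_verify_* writes tell ZFS to skip verifying metadata and data blocks while loading the pool, which is what can let a damaged pool import at all; -f forces the import, -o readonly=on keeps the rescue import from writing anything, and -R /mnt mounts everything under an alternate root.)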
  4. ZFS no pools available yet ONLINE import status, I/O error on Import attempt

    Hey @colinstu, did you ever do a full write-up? I am currently trying to resolve a similar situation with the added complication of encrypted ZFS. Were you able to make the zpool importable again? (I believe that once I manage to import, decrypting should be possible.) Thanks!
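
    For others landing here with an encrypted pool, the usual rescue sequence is a read-only import without mounting, then loading keys; a minimal sketch, assuming native ZFS encryption on a pool named rpool with a passphrase or an accessible keyfile:

    ```sh
    # Import without mounting any datasets, read-only, under /mnt
    zpool import -f -o readonly=on -R /mnt -N rpool

    # Load the encryption key(s), then mount the datasets
    zfs load-key -a
    zfs mount -a
    ```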
  5. [Server migration] How should I approach this?

    In the end I got it working by reformatting, reinitializing, and updating the boot partition(s) from the chroot (which had /sys and /dev bind mounted). I actually had an error with one partition, so I need to double-check that *both* SSDs actually have working boot partitions, but this is already...
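
    For anyone following along, the chroot + boot-partition rebuild can look roughly like this; a sketch assuming the new root is mounted at /mnt and the ESP is /dev/sda2 (hypothetical device, repeat for the second SSD):

    ```sh
    # Bind the virtual filesystems into the mounted root, then enter it
    for fs in dev sys proc; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt /bin/bash

    # Reformat and re-initialize the ESP, then sync kernels and bootloader
    proxmox-boot-tool format /dev/sda2 --force
    proxmox-boot-tool init /dev/sda2
    proxmox-boot-tool refresh
    ```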
  6. [Server migration] How should I approach this?

    Please note that I have run the steps from https://forum.proxmox.com/threads/proxmox-rescue-disk-trouble.127585/#post-557888 I have also tried to chroot into the resulting mount of rpool and run `proxmox-boot-tool status` and `proxmox-boot-tool refresh`; as I understand it, the output seems to suggest all is...
  7. [Server migration] How should I approach this?

    At the moment I'm still trying to fix boot issues. What I ended up doing so far:
    1. Connect the old mirror to the SATA ports of the new motherboard
    2. Boot an Ubuntu 25.04 live session (just what I happened to have an ISO of)
    3. Create a GPT partition table and 3 partitions (1M - bios_boot, 1G - EFI, the rest)
    4. add...
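
    Step 3 with sgdisk would look something like the sketch below (hypothetical device name; EF02 = BIOS boot, EF00 = EFI system, BF01 = Solaris/ZFS):

    ```sh
    sgdisk --zap-all /dev/nvme0n1            # fresh GPT table
    sgdisk -n1:0:+1M -t1:EF02 /dev/nvme0n1   # 1M bios_boot partition
    sgdisk -n2:0:+1G -t2:EF00 /dev/nvme0n1   # 1G EFI partition
    sgdisk -n3:0:0   -t3:BF01 /dev/nvme0n1   # the rest, for ZFS
    ```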
  8. [Server migration] How should I approach this?

    Just putting all the old drives on the new motherboard is not possible, since it "only" has 9 SATA ports and I am using 10; also, as said, I saw this as a nice opportunity to upgrade the ZFS mirror used for Proxmox and primary storage to NVMe. I could have a degraded OS disk and migrate it to the NVMe...
  9. [Server migration] How should I approach this?

    (Sorry about the vague title, I was a bit unsure what to use; even writing this post is brainstorming for me.) I have a single Proxmox host in my homelab; it has 10 SATA SSDs split as follows:
    - Proxmox OS + the majority of guest OS disks sit on a 2-device ZFS mirror
    - Some guest VMs have data living...
  10. [heartbeat] Are alternative interfaces supported?

    Thanks for the fast reply! I really liked the idea of simple cables, since that is the fewest possible things that can break, but I guess it was not to be.
  11. [heartbeat] Are alternative interfaces supported?

    I was wondering, is it possible to leverage other interfaces like USB, serial, etc. for the corosync heartbeat in a Proxmox cluster? That way you could avoid having a switch (which can also fail) in your heartbeat path and just have a mesh (for small 3-node clusters you would only need 3...
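
    Corosync speaks UDP over IP rather than raw serial or USB, but the switchless idea works with direct NIC-to-NIC cables: kronosnet supports multiple redundant links per node. A sketch of the relevant corosync.conf pieces, with hypothetical node names and addresses:

    ```
    totem {
        version: 2
        cluster_name: homelab
        transport: knet
        link_mode: passive
    }

    nodelist {
        node {
            name: pve1
            nodeid: 1
            # ring0 = normal cluster network, ring1 = direct point-to-point link
            ring0_addr: 10.0.0.1
            ring1_addr: 10.0.1.1
        }
        node {
            name: pve2
            nodeid: 2
            ring0_addr: 10.0.0.2
            ring1_addr: 10.0.1.2
        }
    }
    ```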
  12. [SOLVED] NFS server in LXC

    I think you are 100% correct; unless you use nfs-ganesha, you are probably worse off using a container, because you are providing a server that ties into the host kernel, so all "benefits" of containers/VMs go out the window.
  13. [debating] Should I run nutd directly on PVE or as a VM/container?

    If it were a networked UPS that would be different, but here it's just USB, so if the host is down there is no monitoring unless I physically plug it in elsewhere. For me, the way to move easily would probably just be to build an Ansible automation; the server doesn't store anything as far as I recall, and if I had clients...
  14. [debating] Should I run nutd directly on PVE or as a VM/container?

    That is exactly my instinct, but because others were doing it that way (there is even a topic about the LXC route on this forum) I was questioning myself.
  15. [debating] Should I run nutd directly on PVE or as a VM/container?

    Hey everyone, I have a UPS connected directly to my PVE host by USB and I would like to monitor it with nutd. My gut says that since nutd is a small daemon, and since it anyhow needs to do drastic things like shutting down the host when the battery gets low, it should just run directly on the host...
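
    For the on-host route, the NUT side is small; a minimal sketch in standalone mode, with myups as a hypothetical name (test with `upsc myups` afterwards):

    ```
    # /etc/nut/nut.conf
    MODE=standalone

    # /etc/nut/ups.conf
    [myups]
        driver = usbhid-ups
        port = auto
        desc = "USB UPS on the PVE host"
    ```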
  16. [SOLVED] Transfer root filesystem to ZFS mirror or mdraid mirror?

    Just wanted to check back in - I did what was suggested, linked the disks to a VM and installed Proxmox. After that I still had to modify the GRUB command line to enable serial output, and of course fix the network config. As for the issue I had with /etc/pve, it turns out that in my zeal to...
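
    The serial-output change usually amounts to a few lines in /etc/default/grub, followed by update-grub; a sketch assuming the first serial port at 115200 baud:

    ```
    GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyS0,115200"
    GRUB_TERMINAL="console serial"
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
    ```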
  17. [SOLVED] Transfer root filesystem to ZFS mirror or mdraid mirror?

    I just realized I missed the memo that /etc/pve comes from config.db. I have attempted to copy config.db from the old install, but that did not give me the desired outcome; then I tried "unifying" the old and new config.db (i.e. copying just those rows from the old that contain info I want and fixing...
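
    For anyone attempting the same: config.db lives at /var/lib/pve-cluster/config.db and is plain SQLite, with pmxcfs serving the rows of a tree table as the files under /etc/pve. A sketch of poking at it (schema from memory, so verify before copying rows, and always work on a backup with pve-cluster stopped):

    ```sh
    systemctl stop pve-cluster
    cp /var/lib/pve-cluster/config.db /root/config.db.bak

    # List the virtual files pmxcfs stores
    sqlite3 /var/lib/pve-cluster/config.db "SELECT inode, name FROM tree;"

    # Dump one entry, e.g. a guest config
    sqlite3 /var/lib/pve-cluster/config.db \
      "SELECT data FROM tree WHERE name = '100.conf';"

    systemctl start pve-cluster
    ```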
  18. [SOLVED] Transfer root filesystem to ZFS mirror or mdraid mirror?

    OK, turns out I should have added it from the WUI and not by editing storage.cfg.
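
    The CLI equivalent of the WUI here is pvesm, which goes through the same API validation instead of raw file edits; storage and pool names below are hypothetical:

    ```sh
    pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir
    ```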
  19. [SOLVED] Transfer root filesystem to ZFS mirror or mdraid mirror?

    Hehe, total inception this... I'm finding all kinds of interesting differences between my old install (converted Debian) and the "native" install, for instance no /etc/pve/storage.cfg. At the moment I'm trying to get the new install to recognize the old one so I can import the VMs; so far no luck :/ I...
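
    One hedged route for pulling the guests across, assuming the old pool imports cleanly and the disks are plain zvols (importing by GUID with a rename avoids a name clash with the new rpool; all names and the VMID are hypothetical):

    ```sh
    # Import the old pool under a new name (find its GUID with zpool import)
    zpool import -f <old-pool-guid> oldrpool
    pvesm add zfspool old-zfs --pool oldrpool/data --content images

    # Re-create the VM shell, then attach the existing disk and boot from it
    qm create 100 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
    qm set 100 --scsi0 old-zfs:vm-100-disk-0 --boot order=scsi0
    ```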