Search results

  1. Sharing ZFS dataset via SMB on Proxmox using CT turnkey fileserver

    Bind mount from the host to the LXC, then share from there. You'd have a line in your conf something like: mp0: /tank/backups,mp=/mnt/backups,replicate=0. Then inside the LXC, you share /mnt/backups. If I remember correctly, bind mounts are not recursive, so you can't just bind mount /tank, you...
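    A minimal sketch of how that looks end to end, assuming a container with ID 101 and a host dataset at /tank/backups (both names are illustrative):

      # /etc/pve/lxc/101.conf -- bind mount a host path into the container;
      # replicate=0 excludes the mount point from storage replication
      mp0: /tank/backups,mp=/mnt/backups,replicate=0

      # equivalently, from a root shell on the host:
      pct set 101 -mp0 /tank/backups,mp=/mnt/backups,replicate=0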
  2. Usage of Device Mount Points to LXC for S.M.A.R.T. monitoring and notification app (Scrutiny). Is it ok?

    Did you ever find out if you could do this @elkemper? I'm curious about giving this a try myself.
  3. [TUTORIAL] PVE 8.22 / Kernel 6.8 and NVidia vGPU

    I don't know anything about the A100. Does it require different drivers than the A10? https://docs.nvidia.com/grid/16.0/grid-vgpu-release-notes-generic-linux-kvm/index.html Here they explicitly mention support for the A10.
  4. [TUTORIAL] PVE 8.22 / Kernel 6.8 and NVidia vGPU

    Thanks for this. I just ran through this without error. Note that 535.161.05.patch is available on PolloLoco's site. ./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run --dkms -m=kernel --uninstall, then proxmox-boot-tool kernel unpin, then reboot; run uname -r to verify kernel 6.8 was loaded after the reboot...
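    As a sketch, the sequence described in that post would look like this from a root shell on the host (the .run filename matches the driver version mentioned; adjust it to whatever you downloaded):

      # remove the old vGPU host driver and its DKMS module
      ./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run --dkms -m=kernel --uninstall

      # stop pinning the older kernel so the newest installed one boots
      proxmox-boot-tool kernel unpin
      reboot

      # after the reboot, confirm the 6.8 kernel is running
      uname -r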
  5. HA Node - Maintenance Mode UI

    Edit: Sorry, I didn't mean to post this in the Proxmox Backup Server forum. I don't see any way for me to move it or even delete it. This is probably more of a feature request; however, being new to HA, I didn't know there was a way to put a node in maintenance mode. It would be nice if there...
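    For reference, recent PVE releases do expose this on the CLI; if I recall correctly it arrived around PVE 7.3 (the node name below is a placeholder):

      # put a node into HA maintenance mode
      ha-manager crm-command node-maintenance enable pve-node1

      # take it back out
      ha-manager crm-command node-maintenance disable pve-node1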
  6. Quorum Node vs Qdevice for 2 node cluster

    Unless I'm mistaken, HA via Replication requires local ZFS storage. https://pve.proxmox.com/wiki/Storage_Replication I would recommend, if this is something that the OP really wants, that they keep an eye on the secondary market for a few enterprise-level SSD drives. That's what I did.
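    For context, replication on local ZFS is configured per guest; a minimal sketch, assuming guest ID 100 and a target node named pve2 (both placeholders):

      # replicate guest 100 to node pve2 every 15 minutes
      pvesr create-local-job 100-0 pve2 --schedule "*/15"

      # check replication state for all configured jobs
      pvesr status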
  7. ZFS block cloning/dnode_is_dirty bug

    I think that block_cloning needs to be enabled with zpool upgrade on existing pools, which is why you have different results.
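    One way to check, assuming a pool named tank (placeholder):

      # show whether the block_cloning feature is enabled/active on the pool
      zpool get feature@block_cloning tank

      # enable all features supported by the current ZFS version (one-way operation)
      zpool upgrade tank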
  8. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    I started a thread about this before I thought to check this one. However, it seems someone can reproduce it even with zfs_dmu_offset_next_sync=0. https://github.com/openzfs/zfs/issues/15526#issuecomment-1826065538 It seems this might be the fix...
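    For anyone trying that mitigation anyway, the tunable can be set at runtime or persisted via modprobe; whether it actually avoids the bug is exactly what the linked comment disputes:

      # runtime (until reboot)
      echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync

      # persistent across reboots
      echo "options zfs zfs_dmu_offset_next_sync=0" > /etc/modprobe.d/zfs.conf
      update-initramfs -u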
  9. ZFS block cloning/dnode_is_dirty bug

    I just stumbled over this and I was wondering how it relates to the versions of ZFS here in the Proxmox kernel. It seems like there is a bug that has been around for a while and was brought to light by the latest block cloning code, or maybe they are 2 different bugs, I don't really...
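    To see which ZFS version a given Proxmox kernel actually ships, something like this should answer it:

      # print userland and kernel module versions
      zfs version

      # or query the loaded kernel module directly
      modinfo zfs | grep -w version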
  10. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    So far, seems to have resolved my migration issue: https://forum.proxmox.com/threads/opt-in-linux-5-19-kernel-for-proxmox-ve-7-x-available.115090/post-499008
  11. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    It looks like it has resolved my migration issues to/from an i7-12700K and i7-8700K machine.
  12. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    5.15.53-1-pve doesn't work for me. Migrating a VM from an i7-12700K to an i7-8700K did the typical 100% CPU thing. Back to 5.15.39-3-pve-guest-fpu.
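    If you need to stay on a known-good kernel like that, pinning it is one option on reasonably recent PVE versions (the version string is the one from the post):

      # boot this specific kernel by default until unpinned
      proxmox-boot-tool kernel pin 5.15.39-3-pve-guest-fpu

      # list installed kernels to confirm
      proxmox-boot-tool kernel list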
  13. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    I rolled back both nodes. Edit: Before rolling back, I could migrate from the 8700K to the 12700K without issue. So I migrated things off the 8700K and rolled it back. When I attempted to migrate from the 12700K to the 8700K so I could roll it back, they hung, so I don't think you can apply it to...
  14. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    I'm just a freeloader running on consumer-grade hardware (except for my SSDs and NICs) and I also have the same issue. One box has an i7-12700K in it, the other an i7-8700K, and migrating a machine from the 12700K to the 8700K would cause it to lock up and I'd have to SSH into the node and...
  15. Poor performance over NFS

    This may or may not be related, but try installing iperf3 on your OpenMediaVault VM, on the Proxmox host, and on one of your client machines. Run iperf3 -s on your VM, then iperf3 -c vm.ip from the client and see if you are getting a high Retr count. Run iperf3 -s on your Proxmox host and see if you...
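    Spelled out, the test described above is (vm.ip stands in for the VM's actual address):

      # on the VM (or the Proxmox host, for the second test): start the server
      iperf3 -s

      # on the client: run the test and watch the Retr (retransmissions) column
      iperf3 -c vm.ip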
  16. Unexplainable delay in LXC container

    I just happened to create my first Debian 11 container last night and noticed the same thing as well. I normally used Ubuntu 20.04 containers and never noticed these issues.
  17. Passing RAID to Windows Server

    I did. About the only thing I didn't try was downgrading the firmware. Oh well. I'll just pass the volume. I am tired of dealing with it and that seems to work.
  18. Passing RAID to Windows Server

    Potentially I might want to create a 15+ drive Storage Spaces based pool. Then I'd need to pass the controller or HBA (I'd just get the HBA; it would be silly to use the expensive RAID card for JBOD). I thought about doing this, but I don't think I will. Yes, I know about ZFS, I just don't want...
  19. Passing RAID to Windows Server

    There isn't a *need* to pass it to Windows. At least, as long as I use it as RAID and not JBOD. (You can only pass 15 SCSI disks to a VM.) It works in Windows 10. It doesn't work in Windows Server. While the volume is empty right now, it isn't a big deal. Understanding the root of the problem...
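    For reference, passing the whole RAID volume as a block device looks something like this (the VM ID and by-id path are placeholders):

      # attach a host block device to VM 200 as a SCSI disk
      qm set 200 -scsi1 /dev/disk/by-id/scsi-<your-volume-id>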