Search results

  1. ZFS block cloning/dnode_is_dirty bug

    I think that block_cloning needs to be enabled with zpool upgrade on existing pools, which is why you have different results. [see the zpool sketch after these results]
  2. Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

    I started a thread about this before I thought to check this one. However, it seems someone can repro it even with zfs_dmu_offset_next_sync=0. https://github.com/openzfs/zfs/issues/15526#issuecomment-1826065538 Seems this might be the fix... [see the module-parameter sketch after these results]
  3. ZFS block cloning/dnode_is_dirty bug

    I just stumbled over this and I was wondering how it relates to the versions of ZFS here in the Proxmox kernel? It seems like there is a bug that's been around for a while that was brought to light by the latest block cloning code, or maybe they are 2 different bugs, I don't really...
  4. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    So far, it seems to have resolved my migration issue: https://forum.proxmox.com/threads/opt-in-linux-5-19-kernel-for-proxmox-ve-7-x-available.115090/post-499008
  5. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    It looks like it has resolved my migration issues to/from an i7-12700K and i7-8700K machine.
  6. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    5.15.53-1-pve doesn't work for me. Migrating a VM from an i7-12700K to an i7-8700K did the typical 100% CPU thing. Back to 5.15.39-3-pve-guest-fpu.
  7. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    I rolled back both nodes. Edit: Before rolling back, I could migrate from the 8700k to the 12700k w/o issue. So I migrated things off the 8700k and rolled it back. When I attempted to migrate from the 12700k to the 8700k so I could roll it back, they hung, so I don't think you can apply it to...
  8. [Proxmox 7.2-3 - CEPH 16.2.7] Migrating VMs hangs them (kernel panic on Linux, freeze on Windows)

    I'm just a freeloader running on consumer-grade hardware (except for my SSD and NIC cards) and I also have the same issue. One box has an i7-12700K in it, the other an i7-8700K, and migrating a machine from the 12700K to the 8700K would cause it to lock up and I'd have to SSH into the node and...
  9. Poor performance over NFS

    This may or may not be related, but try installing iperf3 on your OpenMediaVault VM, on the Proxmox host, and on one of your client machines. Run iperf3 -s on your VM, then iperf3 -c vm.ip from the client and see if you are getting a high Retr (retransmit) count. Run iperf3 -s on your Proxmox host and see if you... [see the iperf3 sketch after these results]
  10. Unexplainable delay in LXC container

    I just happened to create my first Debian 11 container last night and noticed the same thing as well. I normally use Ubuntu 20.04 containers and never noticed these issues.
  11. Passing RAID to Windows Server

    I did. About the only thing I didn't try was downgrading the firmware. Oh well. I'll just pass the volume. I am tired of dealing with it and that seems to work.
  12. Passing RAID to Windows Server

    Potentially I might want to create a 15+ drive Storage Spaces based pool. Then I'd need to pass the controller or an HBA (I'd just get the HBA; it would be silly to use the expensive RAID card for JBOD). I thought about doing this, but I don't think I will. Yes, I know about ZFS, I just don't want...
  13. Passing RAID to Windows Server

    There isn't a *need* to pass it to Windows. At least, as long as I use it as RAID and not JBOD. (You can only pass 15 SCSI disks to a VM.) It works in Windows 10. It doesn't work in Windows Server. While the volume is empty right now, it isn't a big deal. Understanding the root of the problem...
  14. Passing RAID to Windows Server

    Everything works great passing the controller to Windows 10. It all falls apart passing the controller to Windows Server 2016/2019. Inside the Windows 10 VM, I was able to fill 75% of the new volume by copying data from a 2nd volume on the same controller. Inside the Server VM, I can't even format the...
  15. Passing RAID to Windows Server

    This is more of a question, not saying anything is wrong with Proxmox. I am able to pass through an Areca 1882ix-24 to Windows 10 and use the latest Areca driver (6.20.00.33) without any issue. If I attempt to pass this card through to a Windows 2016 or even Windows 2019, I start having all kinds of...
  16. [SOLVED] Expanding Volume Passed to VM

    Ah, thanks. I searched around, but I missed the obvious.
  17. [SOLVED] Expanding Volume Passed to VM

    I have an Areca RAID controller that I was going to pass to the VM, but I'm having issues with the VM not being able to reboot because it isn't "releasing" the card. I am going to look into blacklisting it on the host to see if that makes a difference, but I thought maybe I'd just pass the... [see the blacklist sketch after these results]
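
Command sketches

Following up on result 1: a minimal sketch, assuming a pool named rpool, of checking whether the block_cloning feature is active and enabling it. Note that zpool upgrade turns on every supported feature at once, after which older ZFS releases may refuse to import the pool.

    # Show the state of the block_cloning feature (disabled/enabled/active)
    zpool get feature@block_cloning rpool

    # Enable just this feature...
    zpool set feature@block_cloning=enabled rpool

    # ...or enable all supported features at once (one-way operation)
    zpool upgrade rpool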
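
For result 2: a sketch of how the zfs_dmu_offset_next_sync workaround is usually applied, assuming a Debian-based Proxmox host with the zfs module loaded. Per the linked issue, at least one reporter could still reproduce the problem with this set to 0, so treat it as a mitigation rather than a fix.

    # Apply at runtime (lost on reboot)
    echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync

    # Persist via a modprobe option (appending so existing zfs options are kept)
    # and rebuild the initramfs
    echo "options zfs zfs_dmu_offset_next_sync=0" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u -k all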
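
For result 9: the iperf3 test described there, with vm.ip and host.ip as placeholders for your actual addresses.

    # Install iperf3 on the OpenMediaVault VM, the Proxmox host, and a client
    apt install iperf3

    # On the VM (and later on the Proxmox host): start a server
    iperf3 -s

    # On the client: test against the VM, then against the host,
    # and compare the Retr (retransmits) column
    iperf3 -c vm.ip
    iperf3 -c host.ip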
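
For result 17: a sketch of blacklisting the Areca driver on the host so the card is free for passthrough. The module name arcmsr is the standard Areca driver; the vfio-pci ID 17d3:1882 is only an illustration, so substitute the vendor:device pair that lspci reports for your card.

    # Find the controller's vendor:device ID
    lspci -nn | grep -i areca

    # Keep the host from claiming the card
    echo "blacklist arcmsr" > /etc/modprobe.d/blacklist-arcmsr.conf

    # Optionally bind it to vfio-pci at boot (replace the ID with your own)
    echo "options vfio-pci ids=17d3:1882" > /etc/modprobe.d/vfio.conf

    update-initramfs -u -k all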
