I started a thread about this before I thought to check this one. However, it seems someone can repro it even with zfs_dmu_offset_next_sync=0.
https://github.com/openzfs/zfs/issues/15526#issuecomment-1826065538
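For reference, checking or setting that tunable is just the standard ZFS module-parameter mechanics (not saying it's a fix, and the paths below assume a normal Proxmox/Debian install):

# check the current value
cat /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
# change it at runtime
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
# make it persistent across reboots
echo "options zfs zfs_dmu_offset_next_sync=0" >> /etc/modprobe.d/zfs.conf
update-initramfs -u -k all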
Seems this might be the fix...
I just stumbled over this and I was wondering how it relates to the versions of ZFS here in the Proxmox kernel? It seems like there is a bug that's been around for a while that was brought to light by the latest block cloning code, or maybe they are 2 different bugs, I don't really...
So far, seems to have resolved my migration issue:
https://forum.proxmox.com/threads/opt-in-linux-5-19-kernel-for-proxmox-ve-7-x-available.115090/post-499008
I rolled back both nodes.
Edit: Before rolling back, I could migrate from the 8700k to the 12700k w/o issue. So I migrated things off the 8700k and rolled it back. When I attempted to migrate from the 12700k to the 8700k so I could roll it back, they hung, so I don't think you can apply it to...
I'm just a freeloader running on consumer-grade hardware (except for my SSDs and NICs) and I also have the same issue. One box has an i7-12700K in it, the other an i7-8700K, and migrating a machine from the 12700K to the 8700K would cause it to lock up and I'd have to SSH into the node and...
This may or may not be related, but try installing iperf3 on your openmediavault VM, on the Proxmox host, and on one of your client machines.
Run iperf3 -s on your VM, then iperf3 -c vm.ip from the client and see if you are getting a high Retr count. Then run iperf3 -s on your Proxmox host and see if you...
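To spell the commands out (the IP is just a placeholder for your VM's address):

# on the OMV VM, then repeat later on the Proxmox host
iperf3 -s
# from the client, run a 30 second test against the VM
iperf3 -c 192.168.1.50 -t 30
# a high number in the Retr column points at retransmits / packet loss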
I just happened to create my first Debian 11 container last night and noticed the same thing as well. I normally use Ubuntu 20.04 containers and never noticed these issues.
I did. About the only thing I didn't try was downgrading the firmware.
Oh well. I'll just pass the volume. I am tired of dealing with it and that seems to work.
Potentially I might want to create a 15+ drive Storage Spaces based pool. Then I'd need to pass the controller or an HBA (I'd just get the HBA; it would be silly to use the expensive RAID card for JBOD). I thought about doing this, but I don't think I will. Yes, I know about ZFS, I just don't want...
There isn't a *need* to pass it to Windows. At least, as long as I use it as RAID and not JBOD. (You can only pass 15 SCSI disks to a VM.)
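For anyone curious, passing individual disks (rather than the whole controller) is just a matter of pointing the VM at the block devices; the VM ID and disk serial below are made up:

# attach a physical disk to VM 100 as scsi1, using its stable by-id path
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL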
It works in Windows 10. It doesn't work in Windows Server. Since the volume is empty right now, it isn't a big deal. Understanding the root of the problem...
Everything works great passing the controller to Windows 10. It all falls apart passing the controller to Windows Server 2016/2019.
Inside the Windows 10 VM, I was able to fill 75% of the new volume by copying data from a 2nd volume on the same controller. Inside the Server VM, I can't even format the...
This is more of a question, not saying anything is wrong with proxmox.
I am able to pass thru an Areca 1882ix-24 to Windows 10 and use the latest Areca driver (6.20.00.33) without any issue.
If I attempt to pass this card thru to a Windows Server 2016 or even Windows Server 2019 VM, I start having all kinds of...
I have an Areca RAID controller that I was going to pass to the VM, but I'm having issues with the VM not being able to reboot because it isn't "releasing" the card. I am going to look into blacklisting it on the host to see if that makes a difference, but I thought, maybe I'd just pass the...
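If anyone else goes down the blacklist route, the usual approach is something like this (arcmsr is the Areca driver on Linux; double-check that's what your card actually loads):

# /etc/modprobe.d/blacklist-areca.conf
blacklist arcmsr
# rebuild the initramfs so the host never binds the card, then reboot
update-initramfs -u -k all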