I did not dismiss anything; I am just trying to understand your odd accusations, given that nothing has changed for your existing PVE subscriptions.
You still get exactly the same value from our lowest subscription tier, the one you chose to pay for, nothing more...
Well, you could also see it as a bargain for enterprise environments: having a support subscription for all your PBS and PVE nodes is enough to get PDM without the nag, plus access to the enterprise repo, instead of having to pay for PDM separately...
Same here. Using PVE 9 with an HP DL380 Gen10 and an HPE MR416i-p Gen10+ RAID controller.
Maybe that's related to this: https://bugzilla.kernel.org/show_bug.cgi?id=220693
Rollback to 6.14 resolved it for now.
Sorry, I misunderstood. In that case you can leave it.
Sure. Proxmox uses the sensible default of a 16k volblocksize.
That means all your VMs' raw disks on ZFS are offered 16k blocks.
Now let's look at how ZFS provides 16k blocks.
You have RAIDZ1...
I would recommend the following:
- Back up the VMs
- Back up your Proxmox settings
- Destroy and reinstall Proxmox
- Use mirrors!
- Reimport the VMs
I would not use 5 drives as RAIDZ1. With the default 16k volblocksize you only get 66% usable capacity and not your...
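To illustrate where that ~66% figure comes from, here is a rough back-of-the-envelope sketch (my own, not from the post above), assuming ashift=12 (4k sectors) and the RAIDZ rule that each allocation is padded to a multiple of parity+1 sectors:

```python
# Rough sketch of RAIDZ space efficiency for a given volblocksize.
# Assumptions (not from the original post): ashift=12 (4k sectors),
# and each RAIDZ allocation is padded to a multiple of (parity + 1) sectors.
import math

def raidz_efficiency(disks, parity=1, volblocksize=16 * 1024, sector=4096):
    data_sectors = volblocksize // sector      # 16k / 4k = 4 data sectors
    stripe_data = disks - parity               # data columns per stripe
    # one parity sector per stripe's worth of data sectors
    parity_sectors = math.ceil(data_sectors / stripe_data)
    total = data_sectors + parity_sectors
    # pad each allocation up to a multiple of (parity + 1) sectors
    padded = math.ceil(total / (parity + 1)) * (parity + 1)
    return data_sectors / padded

# 5-disk RAIDZ1, 16k volblocksize: 4 data + 1 parity = 5 sectors,
# padded to 6 -> 4/6 usable, i.e. the ~66% mentioned above.
print(f"{raidz_efficiency(5):.1%}")  # → 66.7%
```

So with 5 disks you pay roughly a third of the raw space for parity plus padding, which is why mirrors are often the better fit for VM workloads.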
Same here, but with a fully updated PBS 4.1 and the enterprise repos. The server is an HP DL360 Gen10+, and the RAID controller is an HP MR416i-a Gen10+.
When the server hits the issue, we have to do a hard reset since some processes get stuck in D...
I just did this yesterday, using the Proxmox GUI. YMMV I suppose, but since I added the new drives first, what I did was:
Mark the OSD out. This will start rebalancing in the background, but all PGs remain blue or green (blue, I believe, was due to...
This is what I've already suggested in my original post:
Yesterday evening I made some waves to finally get the maintainers' attention (I've tagged them at least a few times in the manner "maintainers may decide"; the last attempt was in...
Perhaps the Proxmox team could consider building its own version of virtio drivers with its own installer, signed by Proxmox. This would allow for faster patch deployment, as is done for other subsystems (ZFS, etc.).
@fweber
Hey all, I'm sure a few of you have seen the plugin we are developing over Reddit, but I wanted to make a formal post on the forum here to hopefully get more testers and feedback.
The plugin, source code, and documentation are all available here...
And @fweber, of course I understand the omnipresent dev buzz, so all is OK; just please, there should be some regular "bumps" from the Proxmox staff in the active threads. We, as a community, are quite mighty and capable, but definitely not...
Hi! For now, I'm focusing on the VirtIO SCSI (and, apparently, also Block) problems with 0.1.285 reported here.
@RoCE-geek, thank you for your in-depth investigation of this bug, reporting this issue upstream, and providing a simple fio...
Hey everyone,
A few recent developments prompted us to examine QCOW2’s behavior and reliability characteristics more closely:
1. Community feedback
There are various community discussions questioning the reliability of QCOW2. We have customers...
Hi @Nathan Stratton and all,
You need clear guidance here: do not do that unless you have a very compelling reason to.
a) Your hardware is discontinued and past the end of service, which significantly increases the likelihood of component...
Hi all,
I'm coming back to confirm that the misconfiguration on my switch was the problem.
Now my 2-node cluster is rocking along nicely...
Meanwhile, I have learned a lot during this adventure...
Now I'm able to remove nodes...
Github report is here: virtio issue 1453 - Read-retry errors on Windows Server 2025 with SQL Server (0.1.285+)
But please note that the network-related problems are still there (although I'm not sure if I'm affected), as it was out of my focus...
As I still want to help you, here is the output of my quick GPT-5 Thinking session regarding the reports in this thread and the corresponding NetKVM changes.
Below is a focused delta review of NetKVM changes between those two points, what most...