>>E: Failed to fetch https://enterprise.proxmox.com/debian/ceph-squid/dists/trixie/InRelease 401 Unauthorized [IP: 66.70.154.82 443]
If you don't have an enterprise subscription, you need to configure the no-subscription repositories...
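For reference, a minimal sketch of what that looks like on Proxmox VE 9 / Debian trixie (file names and paths are the usual defaults; adjust to your setup). Disable the enterprise entries (set `Enabled: false` in `/etc/apt/sources.list.d/pve-enterprise.sources` and the Ceph `.sources` file, or remove them), then add something like this in `/etc/apt/sources.list.d/proxmox.sources`:

```
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg

Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
```

After that, `apt update` should no longer hit the 401 on the enterprise host.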
Do you mean switching from vmbrX (MTU 1500) to vmbrY (MTU 1450) with the VM online? If yes, I think currently it only unplugs/replugs the tap interface on the host side, but I'm not sure it changes the value inside the guest. (The only way is to...
10 years ago, OVS had features that the Linux bridge didn't have (VLAN awareness, for example). But today, maybe the only interesting feature it has left is port mirroring.
The whole SDN stack doesn't use OVS at all (including for VXLAN, EVPN, ...).
I don't...
use the vmbrX directly. (Not sure why it's VLAN 4095 on VMware? Is it a hardcoded 4095 trick in VMware to allow all VLANs?)
no
Both are the same, choose what you prefer. (SDN allows more complex setups like VXLAN, EVPN, ..., but for a simple VLAN...
"bridge-vids" is the list of allowed VLANs on the bridge (by default it's 2-4094, to allow all VLANs). If you add a VM interface with a specific VLAN tag, it will not work if that VLAN is not also included in bridge-vids.
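For illustration, a minimal VLAN-aware bridge stanza in `/etc/network/interfaces` (interface names are examples; `eno1` is the assumed physical NIC):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Narrowing `bridge-vids` (e.g. `bridge-vids 10 20 100-110`) is what causes a VM tagged with an unlisted VLAN to lose connectivity.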
By default, if the host doesn't use more than 80% of its memory, the balloon driver uses 0 memory (so the VM can use its maximum memory).
And when the host is above 80% memory usage, the pvestatd daemon tries to inflate the balloon a little on each VM to decrease host memory...
When your host reaches 80% memory usage, the balloon driver of each VM where it's running inflates. The memory taken by the balloon driver is given back to the hypervisor. On Linux you don't see it; it's as if the memory of the VM is...
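The min/max range the balloon can move within is set per VM; a sketch with a hypothetical VMID 100 (values are examples):

```
# Let the VM see up to 8 GiB, but allow the balloon driver to reclaim
# memory down to 4 GiB when the host comes under pressure:
qm set 100 --memory 8192 --balloon 4096

# Setting --balloon 0 disables ballooning for that VM entirely.
```

With `--balloon` equal to `--memory` (the default), the balloon only inflates under host pressure as described above.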
The results seem quite low indeed. QEMU itself is able to reach 200-300k IOPS with 1 core. (I'm working on adding support for multiple iothreads to increase the performance up to 600k IOPS per disk, but you are far from reaching the current CPU limit)...
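To compare against those numbers, a common way to measure raw 4k random IOPS inside the guest is fio; a sketch (the device path is a placeholder, and `randread` is used so the test is non-destructive):

```shell
fio --name=randread --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=64 --numjobs=1 \
    --runtime=30 --time_based
```

`--direct=1` bypasses the guest page cache, so the result reflects the virtual disk path rather than cached reads.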
Hi, thanks for the report. I sent a patch to the mailing list recently to improve the performance of secure erase; it's not yet committed. But I didn't notice this bug with volume activation. I'll look at it next week. (And I'll forward it to the dev...
Hi @dchalon, welcome to the forum.
I believe you will need to clone that new VM as a full clone. There is no unlink operation.
Blockbridge: Ultra low latency all-NVMe shared storage for Proxmox - https://www.blockbridge.com/proxmox
Never tried it, but:
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-kvm_guest_timing_management
To enable the PHC device, do the following on the guest OS:
Set...
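Per the linked Red Hat guide, the rough shape of that guest-side setup is loading the KVM PTP clock driver and pointing chrony at the resulting device; a sketch (device name `/dev/ptp0` is the usual first PTP clock, but verify on your guest):

```shell
# Load the KVM PTP clock driver now and on every boot:
modprobe ptp_kvm
echo ptp_kvm > /etc/modules-load.d/ptp_kvm.conf

# In /etc/chrony.conf, add the PHC as a reference clock:
#   refclock PHC /dev/ptp0 poll 2
# then restart chronyd to pick it up:
systemctl restart chronyd
```

`chronyc sources` should then list the PHC refclock alongside any NTP servers.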
It's quite possible that you are CPU-limited, as currently a VM can only use 1 core per virtual disk.
Multi-threading (with multiple iothreads per disk) should be available soon; I have already sent patches to the Proxmox dev mailing list.
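In the meantime, you can at least give each disk its own dedicated iothread so disks don't share one event loop; a sketch with a hypothetical VMID 100 and placeholder storage/volume names:

```shell
# The virtio-scsi-single controller gives each disk its own controller,
# which is required for per-disk iothreads:
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```

This moves disk I/O off the main QEMU thread; it still caps each disk at roughly one core until the multi-iothread work lands.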
More than one OSD per NVMe will not help. Non-PLP drives really do around 500 IOPS of 4k sync writes vs 20,000 IOPS for a PLP drive.
At a minimum, use cache=writeback; it should help to avoid small writes when possible (merging small adjacent writes into bigger ones).
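The cache mode is a per-disk option; a sketch of setting it on an existing disk (VMID and storage/volume names are placeholders):

```shell
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
```

Note that writeback buffers writes in host RAM, so a host crash can lose recently acknowledged writes unless the guest issues flushes (modern guests do).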
After months of hard work and collaboration with our community, we are thrilled to release the beta version of Proxmox Datacenter Manager. This version is based on the great Debian 13 "Trixie" and comes with a 6.14.11 kernel as the stable default and...