On my Proxmox servers, I have two NICs for user traffic. They are configured as br0 (192.168.90.x) and br1 (192.168.100.x).
Ideally I would love to bond the two NICs into one interface and connect them to two separate switches for physical redundancy, but I was not able to get this working when...
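For reference, what I was attempting was an active-backup bond underneath the bridge (active-backup since the two switches are independent and don't share LACP state); roughly like this in /etc/network/interfaces, with NIC names and addresses as placeholders rather than my actual config:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
    # active-backup needs no switch-side config, so the two ports
    # can go to two independent switches

auto br0
iface br0 inet static
    address 192.168.90.10/24
    gateway 192.168.90.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0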
The Ceph network is on an LACP bond of two 25G links. However, I don't recall ever seeing transfers go above 25G. It might be the way my network guys set things up, or we're doing something wrong :D
I'd think it should be safe with a 100G network based on my personal experience. My cluster averages 10~15 Gbps of read traffic from 48 OSDs on 12 NVMe drives across 5 nodes.
Ultimately, it depends on your usage. Most of the VMs in my cluster are not highly active, but when busy, my 25G Ceph private...
I have the same Supermicro H13SAE-MF board with the Intel i210 NIC in a new server and resolved the problem based on the post by @proteus.
No changes to the BIOS or other settings, only added pcie_aspm.policy=performance to the kernel parameters; so far the NIC has stayed alive for >24 hours with...
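For anyone who hasn't changed kernel parameters on Proxmox before, this is roughly how I added it on a GRUB-booted host (systemd-boot installs with ZFS root use /etc/kernel/cmdline plus proxmox-boot-tool refresh instead):

# edit /etc/default/grub and append the option, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm.policy=performance"
# apply and reboot
update-grub
reboot
# verify after boot
cat /proc/cmdline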
Thanks for this, killing traceroute worked for me.
Strangely, the same traceroute command executed successfully when I switched to tty3 while the original was still stuck.
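For anyone else stuck at the same point, what I ran from the spare console was just something like:

# Ctrl+Alt+F3 to reach a spare console, then:
ps aux | grep traceroute    # find the PID of the stuck traceroute
kill <PID>                  # <PID> being whatever ps reported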
I encountered this issue for the first time installing Proxmox on a laptop with a DHCP-assigned IP. However, this is also the first time I am installing Proxmox on something other than server-grade hardware.
I didn't have this problem when the network cable was unplugged. The Ethernet adapter is a very common Realtek...
Users had been complaining about laggy VM performance on our HDD-based Ceph pool. With many user VMs running applications that constantly log to disk, I see a lot of small writes, with write IOPS more than 5x read IOPS.
Reading up here and elsewhere, it seems that write-back caching may yield better...
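If anyone wants to test the same change from the CLI rather than the GUI, a rough example assuming VMID 101 with its disk on scsi0 backed by local-lvm (the volume spec must match what qm config already shows):

qm config 101 | grep scsi0        # check the current disk spec first
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback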
What I did on my Proxmox cluster, after checking with our Oracle DBA team, was to have a "storage VM" (I believe this is what others refer to as an iSCSI portal) that exports the drives to the two RAC nodes for use. My understanding is that Oracle ASM handles concurrent access to the data files. The...
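To make the storage VM part concrete: inside that VM the export can be an ordinary Linux iSCSI target; a rough targetcli sketch with made-up IQNs and device names (illustrative only, not my exact configuration), one LUN and one RAC node ACL shown:

targetcli /backstores/block create name=asm1 dev=/dev/sdb
targetcli /iscsi create iqn.2024-01.local.storagevm:asm
targetcli /iscsi/iqn.2024-01.local.storagevm:asm/tpg1/luns create /backstores/block/asm1
targetcli /iscsi/iqn.2024-01.local.storagevm:asm/tpg1/acls create iqn.1993-08.org.debian:01:racnode1
targetcli saveconfig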
Thanks. Unfortunately, I just realized there is no free version of PBS, and the subscription cost is above the target budget after accounting for the remote storage costs, so I need to look at alternatives.
I'm looking into implementing additional backups for a basic Proxmox host (no Ceph/SAN). Currently it's just rsync to another physical, non-Proxmox machine; that machine died, which prompted this look into additional offsite backups.
The initial plan was to back up to remote storage like...
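For context, the built-in vzdump can already write to any mounted backup storage even without PBS; a minimal sketch assuming an NFS export on the remote box (storage name, server address, and VMID are placeholders):

# register the remote NFS export as a backup storage
pvesm add nfs remote-backup --server 203.0.113.10 --export /srv/pve-backups --content backup
# back up one VM as a zstd-compressed snapshot
vzdump 101 --storage remote-backup --mode snapshot --compress zstd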
I think this is the same approach I found previously, which requires multiple QXL devices to be added to the VM, but Proxmox by default doesn't support this functionality.
p.s. I am blind and missed the part in the docs that explains how to enable multiple QXL devices. The option is to switch the Display type.
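For anyone else looking for it: the same thing can be done from the CLI, and I believe qxl2/qxl3/qxl4 correspond to the dual/three/four-monitor SPICE options in the GUI; e.g. for two monitors (VMID is a placeholder):

qm set 101 --vga qxl2    # SPICE display with two QXL heads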
Thanks, this looks interesting but seems to be Windows-only. However, it gave me a direction to look into, and I found some other tools that work on Linux.
A few, like RemotePC and NoMachine, are commercial software, but this free and open-source app, https://remmina.org/, has been reported on Reddit to work...
Has anybody tried this, or is it even possible, to run a VM that uses multiple monitors as an extended desktop, i.e. being able to drag an application window from one screen to another? In normal kvm/libvirt, it seems that this is possible by adding QXL devices. However, the Proxmox interface does not appear to...
I encountered this issue after my active manager crashed. The active manager apparently restarted and continued as the active one, but I guess it wasn't fully healthy. Stopping the active manager and waiting for the standby to take over resolved the problem. After that, I started the original active manager...
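In case it saves someone some digging, the failover itself was roughly the following (hostname is a placeholder; run from any node with the Ceph admin keyring):

ceph -s                              # note which mgr is active and which are standby
systemctl stop ceph-mgr@pve-node1    # stop the suspect active manager
ceph -s                              # wait until a standby shows as active
systemctl start ceph-mgr@pve-node1   # original rejoins as a standby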
Thanks for the reply, that is basically what I did after discovering that qemu64 enables the lahf_lm flag. I changed the custom CPU model from reporting as kvm64 to qemu64, and all the flags for x86-64-v2 showed up correctly inside the VM.
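For anyone searching later: custom CPU models live in /etc/pve/virtual-guest/cpu-models.conf, and the relevant change was the reported-model line. Mine now looks roughly like this (the model name is my own and the flag list is abbreviated, not the full x86-64-v2 set):

cpu-model: x86-64-v2-custom
    flags +popcnt;+ssse3;+sse4.1;+sse4.2
    reported-model qemu64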
Using qm showcmd when the CPU is set to the default kvm64, I see that the lahf_lm flag is actually configured already:
-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
lscpu inside the VM:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht...
There is no error. The VM starts up fine, but when I check the output of lscpu, the lahf_lm flag is missing. The rest of the additional flags appear as expected.