I can ping 10.30.0.1 with no issue. It's when a host in 192.168.0.0/23 (VLAN 192) tries to ping one of the VMs that's also on the node with the VPN gateway (the VM's default GW). If I tcpdump the tap interface on that node for the target VM, I...
I found the cause.
It seems that setting flags to "enforce", "hv_relaxed", etc. causes a regex error.
If I set it to "+hv_relaxed", the regex doesn't cause an error, but the qemu command still causes an error.
qm config 1033001
file...
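For reference, the PVE custom CPU-flag syntax wants each flag prefixed with `+` or `-` and multiple flags separated by `;`, and only a small whitelist is accepted (per the PVE docs: pcid, spec-ctrl, ibpb, ssbd, virt-ssbd, amd-ssbd, amd-no-ssb, pdpe1gb, md-clear, hv-tlbflush, hv-evmcs, aes). That would explain both errors: `enforce`/`hv_relaxed` without a prefix trips the regex, and `hv_relaxed` isn't on the whitelist, so QEMU still rejects it. A sketch with whitelisted flags (VMID and flags are just examples):

```shell
# Each flag needs a +/- prefix; multiple flags are ';'-separated.
# Quote the value so the shell doesn't eat the semicolon.
qm set 1033001 --cpu 'host,flags=+pcid;+aes'
```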
I didn't want to make my original post too large with semi-relevant details. I work in IT, and this is definitely not something I'd do for a client in a production environment, but with the equipment I have, this is what I'm able to do at the moment. And...
Still having "got timeout" failures during zfs scrub. Since it amounts to lots of emails every month, I went poking around for a "timeout" to change. Best guess so far is a patch to /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm:
I adjusted the...
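If anyone wants to try the same, this is roughly how I'd go about locating the value and making the change take effect (a sketch; paths match a default PVE install, and note a package update will overwrite the patch):

```shell
# Find the hard-coded timeouts in the plugin before editing:
grep -n 'timeout' /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm

# After patching a PVE Perl module, restart the daemons that load it
# so the change is actually picked up:
systemctl restart pvedaemon pvestatd pveproxy
```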
Any solution is use-case dependent, which is why this is left for you (the operator) to define, and why you can find multiple documents making what seem to be contradictory recommendations.
More PGs per OSD mean more granularity, meaning better seek...
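The classic rule of thumb behind those numbers is total PGs = (OSDs × target PGs per OSD) / replicas, rounded up to a power of two; the pg_autoscaler does this for you on current Ceph. A quick sketch (the values are assumptions for illustration):

```shell
# Rule-of-thumb PG count: (OSDs * target per OSD) / replicas,
# rounded up to the next power of two.
osds=12 target=100 replicas=3
raw=$(( osds * target / replicas ))   # 400
pgs=1
while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"   # prints 512
```

With a target of ~30 per OSD you'd land on a much smaller power of two, which is where the seemingly contradictory recommendations come from.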
Hi everyone.
So I'm new to both PVE and PBS, but have experience with virtualization. Quick backstory is that this weekend, I moved my small home lab/server setup from just a Windows 11 machine with a couple of VMs running through VMware...
Not really obsessing, just trying to understand the reasoning behind ~30, 100, and 200 PG per OSD. Whether or not the above “solution” is a bug or should be clarified in the Proxmox documentation is maybe an open question.
The issues you...
You're concerned with optimal PG count when your cluster is lopsided: you have two nodes with HDDs, two nodes with a lot of SSD capacity, and two nodes with too little. Any HDD device-class rule would not be able to satisfy a replication:3 rule, and an SSD...
Correct. The backup files (just like templates) are tar archives, which by default don't include extended attributes (which is how capabilities are stored). There also seems to be some disagreement as to how they're supposed to be stored in...
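To preserve them you have to ask tar for xattrs explicitly (GNU tar drops them by default, and capabilities live in the `security.capability` xattr). A sketch of an archive/restore that keeps them, with placeholder file names:

```shell
# GNU tar only keeps extended attributes (and thus capabilities)
# when told to. Archive and restore with xattrs preserved:
cd "$(mktemp -d)"
echo demo > tool
tar --xattrs --xattrs-include='security.capability' -cpf backup.tar tool
mkdir restore && tar --xattrs -xpf backup.tar -C restore
cat restore/tool   # prints demo
# For a real binary, verify afterwards with: getcap restore/tool
```

Note that actually restoring `security.capability` needs root, so inside an unprivileged container the capability may still be lost on extraction.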
Ah, good to know, but you haven't said whether you checked with the VMware default SCSI controller, have you?
Try booting from a recovery medium (e.g. the install ISO) and regenerating the initramfs; that should work.
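Roughly like this from the ISO's rescue shell; the device names are assumptions for a default PVE LVM install, so adjust them to your layout:

```shell
# Mount the root filesystem and bind the pseudo-filesystems,
# then regenerate the initramfs from inside a chroot:
mount /dev/mapper/pve-root /mnt
mount /dev/sda2 /mnt/boot/efi          # only if booting via EFI
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt update-initramfs -u -k all
```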
You will still get connectivity to your VMs; vlan-aware is not strictly required. From the Proxmox docs:
VLAN awareness on the Linux bridge: In this case, each guest's virtual network card is assigned to a VLAN tag, which is transparently supported by the...
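For comparison, a vlan-aware bridge in /etc/network/interfaces looks roughly like this (interface names and addresses are examples, not from the original post):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/23
    gateway 192.168.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Without `bridge-vlan-aware yes`, tagged traffic is only handled if you create per-VLAN sub-interfaces or separate bridges instead.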
Another update on the issue: I just went to the Zotac service center to get my RTX 4060 replaced, and the new one (same model) just worked, like the old one did for a while. So I'm guessing the old one developed a fault over time with use. So I suggest...
If you are using VLANs and don't set your vmbr to vlan-aware, you won't have any connection on your VMs, not just connection issues.
Did you get the packet loss only for connections toward the Internet or also on your local network?
MTU size...
Hi, I'm not sure if I fully understand your issue.
If you have a VM in VLAN30, on the same node as your VPN GW, which you are using as Default GW for that VM, you can't ping 10.30.0.1, correct?
Can you run a tcpdump on the node and also on the...
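Something along these lines; the tap interface name is an assumption (find the real one with `ip link` or from the VMID):

```shell
# Watch ICMP on the VM's tap interface:
tcpdump -ni tap1033001i0 icmp

# And on the bridge, with link-level headers, to see whether
# the VLAN tag is actually present on the frames:
tcpdump -nei vmbr0 'vlan and icmp'
```

If the echo request shows up tagged on vmbr0 but never on the tap, that points at the bridge/VLAN config rather than the VM.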
I believe my issue was due to not having the host vmbr set to vlan-aware, although it functioned fine until the last couple of updates.
Will report back if it stays stable