Latest activity

  • J
    I can ping 10.30.0.1 with no issue. It's when a host in 192.168.0.0/23 (VLAN 192) tries to ping one of the VMs that's also on the node with the VPN gateway (the VM's default GW). If I tcpdump the tap interface on that node for the target VM, I...
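    A minimal capture sketch for the step described above, assuming the target VM has VMID 100 and uses its first NIC (so the tap device would be tap100i0); the VMID and the 192.168.0.0/23 host address are placeholders:

      tcpdump -nni vmbr0 icmp and host 192.168.0.50   # does the ping reach the bridge?
      tcpdump -nni tap100i0 icmp                      # does it reach the VM's tap interface?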
  • M
    I found the cause. It seems that setting flags to "enforce", "hv_relaxed", etc. causes a regex error. If I set it to "+hv_relaxed", the regex doesn't cause an error, but the qemu command still causes an error. qm config 1033001 file...
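    For reference, a hedged sketch of the flags syntax the PVE schema accepts (each flag prefixed with + or - and separated by ';'); the VMID comes from the post above, while the flags shown are placeholders from the documented allow-list, not the ones being debugged:

      qm set 1033001 --cpu 'host,flags=+pcid;+spec-ctrl'
      qm config 1033001 | grep '^cpu'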
  • P
    I didn't want to make my original post too large with semi-relevant details. I work in IT and this is definitely not something I'd do for a client in a production environment, but with the equipment I have, this is what I'm able to do at the moment. And...
  • M
    Still having "got timeout" failures during zfs scrub. Since it amounts to lots of emails every month, I went poking around for a "timeout" to change. Best guess so far is a patch to /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm: I adjusted the...
  • K
    Thanks very much for your solution <3!!
  • A
    Any solution is use case dependent, which is why this is left for you (the operator) to define, and you can find multiple documents making what seem to be antagonistic recommendations. More PGs per OSD mean more granularity, meaning better seek...
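    As a rough illustration of that trade-off, the commonly cited back-of-the-envelope rule (12 OSDs, replica size 3, and a target of ~100 PGs per OSD are placeholder numbers):

      total PGs ≈ (OSDs × target PGs per OSD) / replica size
                = (12 × 100) / 3 = 400  → rounded up to a power of two: 512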
  • P
    Hi everyone. So I'm new to both PVE and PBS, but have experience with virtualization. Quick backstory is that this weekend, I moved my small home lab/server setup from just a Windows 11 machine with a couple of VMs running through VMware...
  • S
    Not really obsessing, just trying to understand the reasoning behind ~30, 100, and 200 PGs per OSD. Whether the above “solution” is a bug or something that should be clarified in the Proxmox documentation is maybe an open question. The issues you...
  • A
    You're concerned with optimal PG count when your cluster is lopsided. You have two nodes with HDDs, two nodes with a lot of SSD, and two nodes with too little. Any HDD device class rule would not be able to have a replication:3 rule, and an SSD...
  • A
    Ashford reacted to cyberoot's post in the thread [SOLVED] nfs mount error with Like.
    The solution is to go to the Synology, edit the NFS folder you share, open the NFS permissions tab, and add the IPs of all Proxmox host servers.
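    A hedged sketch of checking and adding the share from the Proxmox side once the Synology NFS permissions include every host IP (the server address, export path, and storage name are placeholders):

      showmount -e 192.168.1.10
      pvesm add nfs synology-nfs --server 192.168.1.10 --export /volume1/proxmox --content images,backup
      pvesm status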
  • LnxBil
    Sorry, I don't install stuff by curling it into bash and I don't see the point in using it. PBS does all of this for me already.
  • S
    sthames42 reacted to wbumiller's post in the thread LXC Capabilities with Like.
    Correct. The backup files (just like templates) are tar archives, which by default don't include extended attributes (which is how capabilities are stored). There also seems to be some disagreement as to how they're supposed to be stored in...
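    A small sketch of what that means in practice, assuming GNU tar and a container root unpacked at rootfs/ (the archive name and paths are placeholders):

      getcap -r rootfs/ 2>/dev/null                      # capabilities live in the security.capability xattr
      tar --xattrs --xattrs-include='security.capability' -cpf backup.tar -C rootfs .
      setcap cap_net_raw+ep rootfs/usr/bin/ping          # or reapply them by hand after restoring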
  • LnxBil
    LnxBil replied to the thread Migrate VM from vSphere to PVE.
    Ah, good to know, but you haven't mentioned whether you checked with the VMware default SCSI controller, have you? Try booting from a recovery medium (e.g. the install ISO) and regenerating the initramfs; that should work.
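    A hedged outline of that recovery step, assuming a Debian-based guest with its root on LVM (the device name and mount point are assumptions):

      mount /dev/mapper/vg-root /mnt
      for d in dev proc sys; do mount --bind /$d /mnt/$d; done
      chroot /mnt update-initramfs -u -k all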
  • LnxBil
    LnxBil reacted to SteveITS's post in the thread Proxmox cluster Kills my ssd? with Like.
    Check eBay, there are often enterprise drives available. Just check the model numbers of what is being sold.
  • F
    You will get connections to your VMs; vlan-aware is not really required. From the Proxmox docs, under “VLAN awareness on the Linux bridge”: In this case, each guest’s virtual network card is assigned to a VLAN tag, which is transparently supported by the...
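    For illustration, a minimal sketch of the setup that docs passage describes (the physical interface, VMID, and VLAN tag are placeholders):

      # /etc/network/interfaces: VLAN-aware bridge
      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

      # tag the guest NIC so the bridge handles VLAN 30 transparently
      qm set 100 --net0 virtio,bridge=vmbr0,tag=30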
  • V
    Another update to the issue: I just went to the Zotac service center to get my RTX 4060 replaced, and the new one (same model) just worked, like the old one did for a while, so I am guessing the old one developed a fault over time with use. So I suggest...
  • E
    emanuelx replied to the thread Proxmox cluster Kills my ssd?.
    Thanks for the tip. Like you said, people have different opinions, which left me confused, but at the same time I will learn more about the system.
  • D
    If you are using VLANs and don't set your vmbr to vlan-aware, you won't have any connection on your VMs, not just connection issues. Did you get the packet loss only for connections toward the Internet or also on your local network? MTU size...
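    A quick way to check the MTU angle, assuming standard 1500-byte Ethernet (the target addresses are placeholders):

      ping -M do -s 1472 192.168.0.1   # local: 1472 bytes of payload + 28 bytes of headers = 1500
      ping -M do -s 1472 1.1.1.1       # Internet: errors out or drops if the path MTU is smaller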
  • D
    Hi, I'm not sure if I fully understand your issue. If you have a VM in VLAN30, on the same node as your VPN GW, which you are using as Default GW for that VM, you can't ping 10.30.0.1, correct? Can you run a tcpdump on the node and also on the...
  • F
    I believe my issue was due to not having the host vmbr set to vlan-aware, although it functioned fine until the last couple of updates. Will report back if it stays stable.