Search results

  1. [SOLVED] Rsync on LXC slow

    I am going to close this one out and start a new thread in the correct forum now that I see it is an LXC memory issue.
  2. [SOLVED] Rsync on LXC slow

    Update on findings with help from a fellow Redditor. It looks like, as rsync runs in the LXC, the OS is allowing the cache to drop the free memory to zero, essentially causing a self denial of service. Running a VM with the same amount of RAM, the free memory drops to around 110KB but never...
  3. [SOLVED] Rsync on LXC slow

    To add after more testing... It appears that the transfer initially starts off at the same rate as in the VM but then drops off to kilobyte speeds...
  4. [SOLVED] Rsync on LXC slow

    So I have a file server set up in an LXC with 1TB of storage allocated, backed by one 3TB NVMe drive. I have an offsite backup running on a Pi 4 with a 2TB USB3 drive attached, and connectivity between the two using ZeroTier. I am testing some basic restores and transfer rates using rsync into the...
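A minimal sketch of observing (and one common way of mitigating) the cache exhaustion described in these first four results; the paths, host name, and the 20M bandwidth cap are assumptions, not details from the thread:

```shell
# In a second shell inside the LXC, watch free memory while the transfer runs:
#   watch -n 1 free -m
#
# One common mitigation: cap rsync's throughput so the page cache can be
# reclaimed faster than the transfer fills it. Source path and destination
# host are made up for illustration.
rsync -a --progress --bwlimit=20M /srv/share/ pi@backup-host:/mnt/usb/backup/
```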
  5. Vlan 1000 not working

    A Reddit user pointed me in the right direction: since in Linux bridge mode a tag is appended to the physical NIC name, the name went to 16 characters. There is a confirmed 15-character limit on Linux interface names. My setup is as described in that first link, where the vlan is...
  6. Vlan 1000 not working

    I wasn't adding it to the enp9s0f0np0 interface. I was adding it to the NIC in the VM config.
  7. Vlan 1000 not working

    On a recommendation from a Reddit user, I moved to OVS and it is working.
  8. Vlan 1000 not working

    A bit more digging and I see the below in "ip addr", but I do not have vlan 1000 defined anywhere right now.

        19: vmbr1v1000: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
            link/ether 76:39:98:bb:18:5c brd ff:ff:ff:ff:ff:ff
  9. Vlan 1000 not working

    My network has a VLAN 1000 defined in it. My Proxmox 10Gb NIC is set up as a bridge and is not VLAN aware. I am adding the VLAN tag to the NIC device. If I define the VLAN as 1000, the VM fails to start. If I drop it down to 999, it starts with no issue. It doesn't matter what NIC type I am trying to...
  10. GUI only node

    Just wanted to circle back around on this setup. So I have a two-node cluster, which actually hosts VMs, and then added a VM as a management node. With strategic naming, when I am now doing a task which requires choosing a host, I always see either the first or second node as the first choice...
  11. GUI only node

    Thanks for the link. It looks promising, but the GUI still just connects to a single node, so if that node is rebooted you are back to the same issue as logging into a node directly. I might play around with the Proxmox GUI setup on an OS and just not put any VMs there. Will probably take a bit of...
  12. GUI only node

    Yes, this is what I have tried. All of my storage backing is via NFS, so even that gets attached when adding to the cluster. Yes, other setups have a VM or bare-metal server that has no capability for hosting VMs. They are strictly for management of host nodes, HA, and other...
  13. GUI only node

    So one of the issues I have with Proxmox is the lack of a GUI that is accessible no matter which node is rebooted. For instance, if I happen to log into node 1 of three nodes and I reboot that node, I then have to go through all the hassle of bringing up a new browser connection to another node...
  14. No migration on reboot of node

    Forgot the version: Virtual Environment 6.0-15.
  15. No migration on reboot of node

    So I have just set up a 3-node cluster, and when I reboot a node, the VMs on it shut down instead of migrating. I do have the shutdown_policy set to failover, and the VMs set as resources under HA. What am I missing? Manual migration works as it should, and the VM's disk is on a shared NFS...
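For context on these two results, a sketch of the relevant knobs. Note that `failover` only recovers HA resources after the node is already down; a `migrate` policy (which live-migrates before the reboot) was added after PVE 6.0, so whether it is available on the poster's 6.0-15 install is something to verify, and the VM ID below is made up:

```shell
# Cluster-wide shutdown behavior lives in /etc/pve/datacenter.cfg, e.g.:
#   ha: shutdown_policy=migrate
#
# Each VM must also be managed as an HA resource for any policy to apply:
ha-manager add vm:100 --state started
ha-manager status
```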
  16. Migrate to new server/shared storage/no ha

    Thanks for the info. That is exactly what I was looking for.
  17. Migrate to new server/shared storage/no ha

    So I am getting ready to move to a new server and have around 10 or so machines. All of them are QEMU-based and use NFS storage. I am not looking to set up HA between the current server and the new server, and am wondering the best way to migrate everything, hopefully without having to recreate...
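One common approach to the move described here, since the disks already sit on NFS reachable from both hosts — a sketch only; the VM ID, storage names, and archive path are assumptions:

```shell
# On the old server: back up a guest (ID 100 assumed) to a backup store
# that both servers can reach.
vzdump 100 --storage nfs-backup --mode snapshot

# On the new server: restore it under the same ID from that shared store.
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-100-*.vma.zst 100
```

Joining both servers into one cluster and using `qm migrate` is the other usual route, but backup/restore avoids clustering entirely, which matches the "no HA" requirement in the question.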

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
