Recent content by grepler

  1. Achieving Isolation for Sec (vm->vm & vm->Hyper)

    Use the PVE Firewall and/or (preferably and) separate VLANs/networks for physical hosts and VMs. https://pve.proxmox.com/pve-docs/chapter-pve-firewall.html
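
    A minimal sketch of that separation, assuming a VLAN-aware bridge on eno1 and illustrative VLAN IDs (10 for host management, 20 for guests):

        # /etc/network/interfaces (sketch; interface names and VLAN IDs are illustrative)
        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 10 20

        # host management on its own VLAN, separate from guest traffic
        auto vmbr0.10
        iface vmbr0.10 inet static
            address 10.0.10.2/24
            gateway 10.0.10.1

    Guest NICs then attach to vmbr0 with tag=20, and PVE Firewall rules control what may cross between the segments.
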
  2. rbind-ing a zfs mountpoint for LXC containers not working as expected

    Question: how are you enabling the samba-alpine feature in an unprivileged container? I'm unfamiliar with that feature module, but from the Proxmox docs it looks like privileged is required. I am running my Samba server without this feature enabled, without any issues, in an unprivileged container...
  3. rbind-ing a zfs mountpoint for LXC containers not working as expected

    I just ran a test on my dev cluster (running the latest non-commercial build of Proxmox) and my LXC recursive bind works just fine. I created a new set of recursive zfs datasets like so:

        root@S1-NAS01:~# zfs list
        NAME   USED   AVAIL   REFER...
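
    A minimal sketch of that kind of nested layout, with hypothetical pool and dataset names:

        # parent dataset with nested children
        zfs create tank/share
        zfs create tank/share/media
        zfs create tank/share/docs
        # confirm the tree
        zfs list -r tank/share
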
  4. rbind-ing a zfs mountpoint for LXC containers not working as expected

    Hi @EpicBeagle, I'm actually still running a 6.x version on the host which supports my recursive bind, so I can't speak to the regression... in fact, I'll have to plan my upgrade process even more carefully now, thanks for the heads-up! Your lxc config line is almost the same as my working...
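
    For reference, a working recursive bind is roughly one line in /etc/pve/lxc/<ID>.conf (paths here are illustrative):

        # rbind pulls in the nested child datasets; a plain bind would not
        lxc.mount.entry: /tank/share mnt/share none rbind,create=dir 0 0
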
  5. VXLAN SDN Network with MTU <1280 means Containers and VMs cannot start

    Thanks Spirit, I've manually configured the MTU to 1230 on the containers, with the VXLAN set to 1280, and it works nicely for now.
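
    The same per-container setting can be applied from the CLI; a sketch with an illustrative container ID and bridge name:

        # pin the NIC MTU below the VXLAN zone MTU
        pct set 101 -net0 name=eth0,bridge=vxnet0,ip=dhcp,mtu=1230
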
  6. VXLAN SDN Network with MTU <1280 means Containers and VMs cannot start

    I have deployed a VXLAN setup on my homelab cluster and I can get connectivity between containers on various hosts, as long as the MTU on the VXLAN zone is greater than or equal to 1280 (the minimum MTU required by IPv6). My intended final state is one where the VXLAN networking is encapsulated...
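
    For context, the usual MTU arithmetic for VXLAN over an IPv4 underlay:

        # outer IPv4 20 + UDP 8 + VXLAN 8 + inner Ethernet 14 = 50 bytes of overhead
        # guest MTU <= underlay MTU - 50, e.g. 1500 - 50 = 1450
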
  7. PBR Remote Sync - Limiting the sync

    For any future users: I did in fact SSH from my off-site server to my primary backup server, copied the /vm and /ct folders in the datastore to my new server, set up a more aggressive prune rule, and then started replication. It starts transferring only the newest images, and once you have a few...
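
    A sketch of the pull-sync-plus-prune arrangement described (remote and datastore names are illustrative, and exact flags vary by PBS version):

        # pull from the primary into the smaller offsite datastore
        proxmox-backup-manager sync-job create offsite-pull \
            --store offsite-ds --remote primary --remote-store main-ds \
            --schedule daily
        # keep the offsite copy small with an aggressive prune policy
        proxmox-backup-manager datastore update offsite-ds --keep-last 5
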
  8. PBR Remote Sync - Limiting the sync

    Perhaps I could fool it by copying some of these directories? Even if the verifications fail and I delete them afterwards, would that serve as a starting point for a given VM?
  9. PBR Remote Sync - Limiting the sync

    Oh! That's excellent news - but how would I get the initial sync (i.e. copy the latest set of backups) to the datastore, if a full sync will fill the destination drives? @dietmar
  10. PBR Remote Sync - Limiting the sync

    One method I was considering (though it means losing some de-duplication) is to have a 'Target' datastore and a 'Deep Archive' datastore at the primary site. PVEs back up to the Target, which keeps the last 2-5 backups, then a local sync runs to Deep Archive. Then the offsite runs a daily sync job to pull from...
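
    A sketch of that two-tier retention (datastore names are illustrative; the keep flags vary by PBS version):

        # short retention on the sync Target, longer retention on Deep Archive
        proxmox-backup-manager datastore update target-ds --keep-last 5
        proxmox-backup-manager datastore update archive-ds --keep-weekly 8 --keep-monthly 12
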
  11. PBR Remote Sync - Limiting the sync

    Our main backup server has 20+ TB of available storage (our colo server), while our offsite only has 12 TB (public bare-metal cloud). Since I can't easily get additional storage at my offsite location, and I only need to keep a smaller set of last-resort backups offsite (say, the 5 most recent...
  12. Cluster moving VM's around

    a) You should be able to use the 'Migrate' button. I believe there is also a Bulk Migrate option available. Generally you will want to set up a replication schedule for each VM if you are using ZFS local storage, or use shared storage like NFS or Ceph to allow faster live migrations. We do it all...
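
    The CLI equivalents, with an illustrative VM ID and node name:

        # live-migrate VM 100 to node2 (shared storage, or ZFS with replication)
        qm migrate 100 node2 --online
        # replicate VM 100's ZFS disks to node2 every 15 minutes
        pvesr create-local-job 100-0 node2 --schedule '*/15'
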
  13. Bind mount not including files in nested ZFS datasets

    Update: I've since replaced my fileserver VMs with containers, which generally have a much happier time mounting nested datasets. The VM<->9P mount performance I was getting just didn't meet my needs, unfortunately. The guide referenced by Ricardo worked really well; here is the LXC...
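
    For comparison, a standard mount point only binds the top-level dataset, so nested datasets show up as empty directories; recursing into them needs an rbind entry as in the thread above (ID and paths illustrative):

        # binds /tank/share itself, but not its child datasets
        pct set 101 -mp0 /tank/share,mp=/mnt/share
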
  14. Proxmox 6.2 NVMe Character Appears, Block Devices missing

    Thanks to kind assistance over on the STH forum, I was able to get these drives out of diagnostic mode. Once you have dm-cli, you can use these commands to identify the device and bring it out of diagnostic mode. Note that the status change will not take effect until you shut down the system. dm-cli...
  15. Proxmox 6.2 NVMe Character Appears, Block Devices missing

    @wolfgang I had the same issue happen on two other SSDs in the same host, and it appears that the devices are not being shut down correctly. I captured a photograph as the server was shutting down. The devices also appear to be in diagnostic mode, as I connected them to a Windows...
