Recent content by grepler

  1. VXLAN SDN Network with MTU <1280 means Containers and VMs cannot start

    Thanks Spirit, I've manually configured the MTU to 1230 on the containers, with the VXLAN zone set to 1280, and it works nicely for now.
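The 1230 value follows from VXLAN's encapsulation overhead over an IPv4 underlay; a quick sanity check of the arithmetic:

```shell
# A VXLAN zone MTU of 1280 leaves 1230 bytes for the container interface,
# because encapsulation adds: 14 (inner Ethernet) + 20 (IPv4) + 8 (UDP)
# + 8 (VXLAN header) = 50 bytes of overhead.
ZONE_MTU=1280
VXLAN_OVERHEAD=50
CONTAINER_MTU=$((ZONE_MTU - VXLAN_OVERHEAD))
echo "$CONTAINER_MTU"   # prints 1230
```

(With an IPv6 underlay the IP header grows to 40 bytes, so the overhead becomes 70.)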
  2. VXLAN SDN Network with MTU <1280 means Containers and VMs cannot start

    I have deployed a VXLAN setup on my homelab cluster and I can get connectivity between containers on various hosts, as long as the MTU on the VXLAN zone is greater than or equal to 1280 (the minimum MTU required by IPv6). My intended final state is one where the VXLAN networking is encapsulated...
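For reference, a VXLAN zone in PVE SDN is defined in /etc/pve/sdn/zones.cfg; a minimal sketch (the zone name and peer addresses below are made-up examples, not from the original post):

```
vxlan: vxzone1
	peers 192.168.1.11,192.168.1.12,192.168.1.13
	mtu 1280
```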
  3. PBR Remote Sync - Limiting the sync

    For any future users: I did in fact SSH from my off-site server to my primary backup server, copied the /vm and /ct folders in the datastore to my new server, set up a more aggressive prune rule and then started replication. It starts transferring only the newest images, and once you have a few...
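A local sketch of that seeding step, assuming a directory-backed datastore; all paths here are examples, and in practice the source folders would be copied over SSH from the primary backup server:

```shell
# Stand-in directories for the two datastore roots (examples only).
SRC=$(mktemp -d)   # plays the role of the primary datastore
DST=$(mktemp -d)   # plays the role of the freshly created datastore
mkdir -p "$SRC/vm/100" "$SRC/ct/200"
echo '{}' > "$SRC/vm/100/index.json"

# Copy only the vm/ and ct/ folders, as described above; the chunk data
# then follows via the sync job, which only transfers what is missing.
cp -a "$SRC/vm" "$SRC/ct" "$DST/"
ls "$DST"
```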
  4. PBR Remote Sync - Limiting the sync

    Perhaps I could fool it by copying some of these directories? Even if the verifications fail and I delete them afterwards, would that serve as a starting point for a given VM?
  5. PBR Remote Sync - Limiting the sync

    Oh! That's excellent news - but how would I get the initial sync (i.e. copy the latest set of backups) to the datastore, if a full sync would fill the destination drives? @dietmar
  6. PBR Remote Sync - Limiting the sync

    One method I was considering (but it means I lose some de-duplication) is to have a 'Target' datastore and a 'Deep Archive' datastore at the primary site. PVEs back up to the Target, which keeps the last 2-5 backups, then a local sync copies them to Deep Archive. The offsite server then runs a daily sync job to pull from...
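The pull job on the offsite server would live in /etc/proxmox-backup/sync.cfg; a sketch under the naming above (the job ID, remote name, and schedule are examples, and option names may vary between PBS versions):

```
sync: offsite-daily
	store offsite
	remote primary-site
	remote-store target
	schedule daily
```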
  7. PBR Remote Sync - Limiting the sync

    Our main backup server (our colo server) has 20+ TB of available storage, while our offsite (a public bare-metal cloud) only has 12 TB. Since I can't easily get additional storage at the offsite location and I only need to keep a smaller set of last-resort backups offsite (say, the 5 most recent...
  8. Cluster moving VM's around

    a) You should be able to use the 'Migrate' button. I believe there is also a Bulk Migrate option available. Generally you will want to set up a replication schedule for each VM if you are using ZFS local storage, or use shared storage like NFS or Ceph to allow faster live migrations. We do it all...
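As a sketch of the CLI equivalents (the VM ID, node name, and schedule below are examples):

```
# Online-migrate VM 100 to node2 (needs shared or replicated storage)
qm migrate 100 node2 --online

# ZFS replication job for VM 100 to node2, every 15 minutes
pvesr create-local-job 100-0 node2 --schedule '*/15'
```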
  9. Bind mount not including files in nested ZFS datasets

    Update: I've since replaced my fileserver VMs with containers, which generally have a much happier time mounting nested datasets. The VM<->9P mount performance I was getting just didn't meet my needs, unfortunately. The guide referenced by Ricardo worked really well; here is the LXC...
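For context, a bind mount of a host dataset into an LXC container is a single line in the container's config; the paths below are illustrative examples, not the ones from the original post:

```
mp0: /tank/fileserver,mp=/srv/fileserver
```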
  10. Proxmox 6.2 NVMe Character Appears, Block Devices missing

    Thanks to kind assistance over on the STH forum, I was able to get these drives out of diagnostic mode. Once you have dm-cli, you can use these commands to identify the device and bring it out of diagnostic mode. Note that the status change will not happen until you shut down the system. dm-cli...
  11. Proxmox 6.2 NVMe Character Appears, Block Devices missing

    @wolfgang I had the same issue happen on two other SSDs in the same host, and it appears that the devices are not being shut down correctly. I captured the following photograph as the server was shutting down: The devices also appear to be in diagnostic mode, as I connected them to a Windows...
  12. Proxmox Backup Server 1.0 (stable)

    I'm running my test system on an encrypted ZFS dataset with no issues so far; the PBS system just sees it as a dedicated directory. Of course I need to manually unlock the dataset whenever the server is rebooted, but that's a feature, and how often should a backup server need to be rebooted, anyway?
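The manual unlock after a reboot is just a couple of ZFS commands (the pool/dataset name here is an example):

```
zfs load-key tank/pbs-datastore   # prompts for the passphrase
zfs mount tank/pbs-datastore
```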
  13. QEMU - Can we Add emulated NVME Devices to Guests?

    Honestly, I'm just trying to cut down on guest latency; the guest I'm currently thinking about is a Windows Server 2019 VM running SQLite databases. I generally see 0.5-1.4 ms latencies on the SCSI drives and I'm just trying to eke out more performance. Looking at the server's iowait, the NVMe...
  14. QEMU - Can we Add emulated NVME Devices to Guests?

    My servers all run HGST260 enterprise PCIe NVMe drives, mirrored. The drives have great performance, but my guests seem to be limited by queue depth. Are we able to use emulated NVMe devices to increase the parallelization of disk IO, and would this help relative to the standard SCSI devices? I...
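For experimentation, QEMU's emulated nvme device can be attached through the VM's args option; the VM ID, backing zvol path, and serial below are examples, and note this bypasses Proxmox's normal disk management:

```
qm set 100 --args '-drive file=/dev/zvol/rpool/data/vm-100-disk-1,if=none,id=nvmedrv0 -device nvme,drive=nvmedrv0,serial=nvme0001'
```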
  15. [SOLVED] Updated bios now stuck on reading all physical volumes...

    Hi @thierrybla, can you describe your solution in a little more detail? I think I am experiencing a related issue with NVMe devices not appearing: https://forum.proxmox.com/threads/proxmox-6-2-nvme-character-appears-block-devices-missing.77950/#post-345850
